Finding a 2x2 Matrix raised to the power of 1000
Let $A=\begin{pmatrix}1&4\\3&2\end{pmatrix}$. Find $A^{1000}$.
Does this problem have to do with eigenvalues, or is there another formula that is specific to 2x2 matrices?
linear-algebra matrices diagonalization
Eigenvalues should do it. Does a formula exist? Probably, but it would just be the process of diagonalizing, written out in one line. – Eric Stucky Dec 8 '13 at 2:17
Unless there is a pattern specific to the matrix, one usually has to diagonalize it. – David H Dec 8 '13 at 2:18
Abstract duplicate? of math.stackexchange.com/questions/55285/… – Eric Stucky Dec 8 '13 at 2:23
edited Dec 8 '13 at 2:23 by Julien
asked Dec 8 '13 at 2:16 by user114220
6 Answers
You should diagonalize it, if possible. This means finding the eigenvalues and eigenvectors.
- The eigenvalues are the roots of the characteristic polynomial of $A$, which is $$\det\begin{pmatrix}1-x&4\\3&2-x\end{pmatrix}=(1-x)(2-x)-12=x^2-3x-10\,.$$
- Find its roots (they sum to $3$ and multiply to $-10$), then find an eigenvector for each ($v_1$ and $v_2$); e.g. $v_2:=\begin{pmatrix}1\\1\end{pmatrix}$ is an eigenvector.
- Build the matrix $P$ with columns $(v_1\,|\,v_2)$, and calculate its inverse.
- Finally, $D:=P^{-1}AP$ is the diagonal matrix containing the eigenvalues, because $$AP=A\,(v_1\,|\,v_2)=(\lambda_1 v_1\,|\,\lambda_2 v_2)=P\begin{pmatrix}\lambda_1&0\\0&\lambda_2\end{pmatrix}.$$
After all this, you can easily raise $A$ to any power:
$$A^{1000}=PD^{1000}P^{-1}\,.$$
answered Dec 8 '13 at 2:35 by Berci
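Not part of the answer, but the recipe above is easy to check numerically with NumPy (using a small exponent, since $5^{1000}$ overflows floating point):

```python
import numpy as np

A = np.array([[1, 4], [3, 2]], dtype=float)

# Columns of P are the eigenvectors v1, v2; eigvals are -2 and 5.
eigvals, P = np.linalg.eig(A)
D = np.diag(eigvals)

# D = P^{-1} A P is diagonal with the eigenvalues on the diagonal.
assert np.allclose(np.linalg.inv(P) @ A @ P, D)

# A^10 = P D^10 P^{-1} matches repeated multiplication.
A10 = P @ np.diag(eigvals**10) @ np.linalg.inv(P)
assert np.allclose(A10, np.linalg.matrix_power(A, 10))
```

For the actual $A^{1000}$ one would use exact integer arithmetic with the closed forms given in the other answers, since the entries have hundreds of digits.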
Check that $A=PBP^{-1}$ with $B$ a diagonal matrix with entries $-2$ and $5$, and $P$ an invertible matrix. Then note that $A^2=(PBP^{-1})(PBP^{-1})=PB^2P^{-1}$, and in general $$A^n=PB^nP^{-1}.$$
The right-hand side is easy to deal with, since powers of a diagonal matrix are very easy to compute. :)
answered Dec 8 '13 at 2:22 by Daniel Montealegre
The well-known strategy is to diagonalize, since this matrix is diagonalizable.
For a change, here is a slightly different approach, which is convenient for small matrices. If you have time, try to carry both methods to the end without computer help. The effort is fairly equivalent, although diagonalization requires two linear systems to solve for the eigenvectors, and here there is only one, for the constants $c,d$ below.
We will compute $A^n$.
Let $p_A(X)=X^2-3X-10$ be the characteristic polynomial of the matrix $A$. By Cayley-Hamilton, $p_A(A)=0$. So to compute $A^n$ it suffices to determine the degree $\leq 1$ remainder in the Euclidean division of $X^n$ by $p_A(X)$.
For every $n\geq 1$, denote by
$$
X^n=q_n(X)p_A(X)+a_nX+b_n
$$
the Euclidean division of $X^n$ by $p_A(X)$.
Since $X^2\equiv 3X+10$ modulo $p_A(X)$, multiplying the latter by $X$ yields
$$
\begin{cases}a_{n+1}=3a_n+b_n\\ b_{n+1}=10a_n\end{cases}\iff\begin{cases}a_{n+1}=3a_n+10a_{n-1}\\ b_{n+1}=10a_n\end{cases}
$$
Solving for $a_n$ is straightforward given the theory of linear homogeneous recurrent sequences. Given that the roots of the characteristic equation (which is of course $p_A(X)=0$) are $-2$ and $5$, we have $a_n=c\,(-2)^n+d\,5^n$. Considering the initial cases $n=0,1$, we find $c$ and $d$, whence
$$a_n=\frac{5^n-(-2)^{n}}{7}\quad\mbox{and}\quad b_n=\frac{2\cdot 5^{n}+5(-2)^{n}}{7}$$
Therefore
$$
A^n=q_n(A)p_A(A)+a_nA+b_nI_2=a_nA+b_nI_2=\begin{pmatrix}a_n+b_n&4a_n\\3a_n&2a_n+b_n\end{pmatrix}
$$
hence
$$A^n=\begin{pmatrix}\frac{3\cdot 5^{n}+(-2)^{n+2}}{7}&\frac{4\cdot 5^{n}-(-2)^{n+2}}{7}\\ \frac{3\cdot 5^{n}-3\cdot(-2)^{n}}{7}&\frac{4\cdot 5^{n}+3\cdot(-2)^{n}}{7}\end{pmatrix}$$
edited Dec 8 '13 at 4:08, answered Dec 8 '13 at 3:16 by Julien
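As a sketch (mine, not part of the answer), the recurrence $a_{n+1}=3a_n+b_n$, $b_{n+1}=10a_n$ and the closed forms for $a_n$ and $b_n$ can be checked with exact integer arithmetic, which also handles $n=1000$ with no overflow:

```python
def remainder_coeffs(n):
    """Coefficients (a_n, b_n) of the remainder of X^n mod (X^2 - 3X - 10)."""
    a, b = 0, 1                 # X^0 = 0*X + 1
    for _ in range(n):
        a, b = 3*a + b, 10*a    # X*(aX + b) = aX^2 + bX ≡ (3a+b)X + 10a
    return a, b

# The closed forms hold exactly: 7*a_n = 5^n - (-2)^n, 7*b_n = 2*5^n + 5*(-2)^n.
for n in range(1, 50):
    a, b = remainder_coeffs(n)
    assert 7*a == 5**n - (-2)**n
    assert 7*b == 2*5**n + 5*(-2)**n
```

Then $A^{1000}=a_{1000}A+b_{1000}I_2$ follows by substituting the exact integers `remainder_coeffs(1000)` into the $2\times 2$ matrix above.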
Performing an eigenvalue decomposition of $A$, we get
$$A=
\begin{bmatrix}
-4/5 & -1/\sqrt2\\
3/5 & -1/\sqrt2
\end{bmatrix}
\begin{bmatrix}
-2 & 0\\
0 & 5
\end{bmatrix}
\begin{bmatrix}
-4/5 & -1/\sqrt2\\
3/5 & -1/\sqrt2
\end{bmatrix}^{-1}
=VDV^{-1}
$$
where $V=\begin{bmatrix}-4/5 & -1/\sqrt2\\ 3/5 & -1/\sqrt2\end{bmatrix}$ and $D=\begin{bmatrix}-2 & 0\\ 0 & 5\end{bmatrix}$.
Hence,
$$A^n=\underbrace{\left(VDV^{-1}\right)\left(VDV^{-1}\right)\cdots\left(VDV^{-1}\right)}_{n\text{ times}}=VD^nV^{-1}$$
answered Dec 8 '13 at 2:35 by user17762
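A quick NumPy sketch (not from the answer) confirming the stated $V$ and $D$, and that $A^n=VD^nV^{-1}$:

```python
import numpy as np

# V's columns are the eigenvectors; D holds the eigenvalues -2 and 5.
V = np.array([[-4/5, -1/np.sqrt(2)],
              [ 3/5, -1/np.sqrt(2)]])
D = np.diag([-2.0, 5.0])

# Reassembling V D V^{-1} recovers A.
A = V @ D @ np.linalg.inv(V)
assert np.allclose(A, [[1, 4], [3, 2]])

# A^n = V D^n V^{-1}; elementwise D**n is D^n since D is diagonal.
n = 8
assert np.allclose(V @ D**n @ np.linalg.inv(V),
                   np.linalg.matrix_power(A, n))
```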
This is a Jordan decomposition, not an eigenvalue decomposition (nor does an eigenvalue decomposition with a diagonal $D$ exist for a non-symmetric/non-Hermitian matrix $A$), because $$V^*V=\begin{bmatrix}1 & (5\sqrt{2})^{-1}\\ (5\sqrt{2})^{-1} & 1\end{bmatrix}\ne I_2.$$ – Vedran Šego Dec 8 '13 at 3:28
$$
A=\begin{pmatrix}1 & 4\\ 3 & 2\end{pmatrix}
=
\frac{3}{2}\,
\overbrace{\begin{pmatrix}1 & 0\\ 0 & 1\end{pmatrix}}^{1}
+
\overbrace{\frac{7}{2}}^{b_x}
\overbrace{\begin{pmatrix}0 & 1\\ 1 & 0\end{pmatrix}}^{\sigma_x}
+
\overbrace{\frac{1}{2}\,i}^{b_y}
\overbrace{\begin{pmatrix}0 & -i\\ i & 0\end{pmatrix}}^{\sigma_y}
\overbrace{-\,\frac{1}{2}}^{b_z}
\overbrace{\begin{pmatrix}1 & 0\\ 0 & -1\end{pmatrix}}^{\sigma_z}
=
\frac{3}{2}+\vec{b}\cdot\vec{\sigma}
$$
where $\{\sigma_\ell,\ \ell=x,y,z\}$ are the Pauli matrices. Then
$e^{At}=e^{3t/2}\,e^{\vec{b}\cdot\vec{\sigma}\,t}$. However, $(\vec{b}\cdot\vec{\sigma})(\vec{b}\cdot\vec{\sigma})=\vec{b}\cdot\vec{b}=49/4$, such that
$$
\left(\frac{d^2}{dt^2}-\frac{49}{4}\right)e^{\vec{b}\cdot\vec{\sigma}\,t}=0
\quad\Longrightarrow\quad
e^{\vec{b}\cdot\vec{\sigma}\,t}=\mu\,e^{7t/2}+\nu\,e^{-7t/2}\,,\quad
\mu,\nu\ \mbox{constants},
$$
with $1=\mu+\nu$ and $2\vec{b}\cdot\vec{\sigma}/7=\mu-\nu$, such that
$\mu=1/2+\vec{b}\cdot\vec{\sigma}/7$ and
$\nu=1/2-\vec{b}\cdot\vec{\sigma}/7$:
$$
e^{At}=\mu\,e^{5t}+\nu\,e^{-2t}\,,
\quad\sum_{n=0}^{\infty}\frac{t^n}{n!}\,A^n
=
\sum_{n=0}^{\infty}\frac{t^n}{n!}\,
\left[5^n\mu+(-1)^n 2^n\nu\right]
$$
$$\color{#0000ff}{\large
\left\lbrace
\begin{array}{rcl}
A^n=5^n\mu+(-1)^n 2^n\nu
& = &
\frac{1}{2}\left[5^n+(-1)^n 2^n\right]
+
\frac{1}{7}\left[5^n-(-1)^n 2^n\right]\vec{b}\cdot\vec{\sigma}
\\[3mm]
\vec{b}\cdot\vec{\sigma}
& = &
A-\frac{3}{2}
=
\begin{pmatrix}-1/2 & 4\\ 3 & 1/2\end{pmatrix}
=
\frac{1}{2}\begin{pmatrix}-1 & 8\\ 6 & 1\end{pmatrix}
\end{array}\right.}
$$
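The boxed closed form $A^n=\frac12\left[5^n+(-2)^n\right]I+\frac17\left[5^n-(-2)^n\right]\vec b\cdot\vec\sigma$, with $\vec b\cdot\vec\sigma=A-\frac32 I$, can be checked numerically (a sketch of my own, not part of the answer):

```python
import numpy as np

A = np.array([[1, 4], [3, 2]], dtype=float)
I2 = np.eye(2)

# b·sigma is the traceless part of A: A minus half its trace times I.
b_sigma = A - 1.5 * I2

# A^n = (1/2)[5^n + (-2)^n] I + (1/7)[5^n - (-2)^n] b·sigma
for n in range(1, 12):
    An = 0.5*(5**n + (-2)**n)*I2 + (5**n - (-2)**n)/7 * b_sigma
    assert np.allclose(An, np.linalg.matrix_power(A, n))
```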
Two additional methods that you can use once you know the eigenvalues $\lambda_1$ and $\lambda_2$:
- Decompose $A$ into $\lambda_1P_1+\lambda_2P_2$, where $P_1=\frac{A-\lambda_2I}{\lambda_1-\lambda_2}$ and $P_2=\frac{A-\lambda_1I}{\lambda_2-\lambda_1}$ are projections onto the corresponding eigenspaces with $P_1P_2=P_2P_1=0$. If you expand $A^{1000}$ using the binomial theorem, you’ll find that all but two terms vanish, giving $\lambda_1^{1000}P_1+\lambda_2^{1000}P_2$.
- Use the Cayley-Hamilton theorem to write $A^{1000}=aI+bA$ for some undetermined coefficients $a$ and $b$, then use the fact that this equation is also satisfied by the eigenvalues, which gives you the system of linear equations $a+b\lambda_i=\lambda_i^{1000}$ to solve for $a$ and $b$.
When $A$ has repeated eigenvalues, you’ll need to modify the above methods a bit, but the underlying ideas are still the same.
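The spectral-projection method above can be sketched with NumPy (my illustration, with $\lambda_1=-2$, $\lambda_2=5$):

```python
import numpy as np

A = np.array([[1, 4], [3, 2]], dtype=float)
I2 = np.eye(2)
lam1, lam2 = -2.0, 5.0

# Projections onto the two eigenspaces.
P1 = (A - lam2*I2) / (lam1 - lam2)
P2 = (A - lam1*I2) / (lam2 - lam1)

# P1, P2 are complementary projections: idempotent, orthogonal, summing to I.
assert np.allclose(P1 @ P1, P1) and np.allclose(P2 @ P2, P2)
assert np.allclose(P1 @ P2, np.zeros((2, 2)))
assert np.allclose(P1 + P2, I2)

# Hence A^n = lam1^n P1 + lam2^n P2.
n = 9
assert np.allclose(lam1**n * P1 + lam2**n * P2,
                   np.linalg.matrix_power(A, n))
```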
6 Answers
6
active
oldest
votes
6 Answers
6
active
oldest
votes
active
oldest
votes
active
oldest
votes
You should diagonalize it, if possible.
This means to find the eigenvalues and eigenvectors.
- The eigenvalues are roots of the characteristic polynomial of $A$, which is
$$detpmatrix{1-x&4\3&2-x}=(1-x)(2-x)-12=x^2-3x-10,.$$ - If you find its roots, (which together sum up to $3$ and multiply to $-10$), then find an eigenvector for both ($v_1$ and $v_2$), [e.g. $v_2:=pmatrix{1\1}$ will be an eigenvector].
- Then build the matrix $P$ with columns $(v_1|v_2)$, and calculate its inverse.
- Finally, $D:=P^{-1}AP$ will be the diagonal containing the eigenvalues, because $$AP=A,(v_1|v_2)=(lambda_1 v_1,|,lambda_2 v_2) = P,pmatrix{lambda_1&0\
0&lambda_2}$$
And after all these, you can easily raise $A$ to any power:
$$A^{1000}=PD^{1000}P^{-1},.$$
add a comment |
You should diagonalize it, if possible.
This means to find the eigenvalues and eigenvectors.
- The eigenvalues are roots of the characteristic polynomial of $A$, which is
$$detpmatrix{1-x&4\3&2-x}=(1-x)(2-x)-12=x^2-3x-10,.$$ - If you find its roots, (which together sum up to $3$ and multiply to $-10$), then find an eigenvector for both ($v_1$ and $v_2$), [e.g. $v_2:=pmatrix{1\1}$ will be an eigenvector].
- Then build the matrix $P$ with columns $(v_1|v_2)$, and calculate its inverse.
- Finally, $D:=P^{-1}AP$ will be the diagonal containing the eigenvalues, because $$AP=A,(v_1|v_2)=(lambda_1 v_1,|,lambda_2 v_2) = P,pmatrix{lambda_1&0\
0&lambda_2}$$
And after all these, you can easily raise $A$ to any power:
$$A^{1000}=PD^{1000}P^{-1},.$$
add a comment |
You should diagonalize it, if possible.
This means to find the eigenvalues and eigenvectors.
- The eigenvalues are roots of the characteristic polynomial of $A$, which is
$$detpmatrix{1-x&4\3&2-x}=(1-x)(2-x)-12=x^2-3x-10,.$$ - If you find its roots, (which together sum up to $3$ and multiply to $-10$), then find an eigenvector for both ($v_1$ and $v_2$), [e.g. $v_2:=pmatrix{1\1}$ will be an eigenvector].
- Then build the matrix $P$ with columns $(v_1|v_2)$, and calculate its inverse.
- Finally, $D:=P^{-1}AP$ will be the diagonal containing the eigenvalues, because $$AP=A,(v_1|v_2)=(lambda_1 v_1,|,lambda_2 v_2) = P,pmatrix{lambda_1&0\
0&lambda_2}$$
And after all these, you can easily raise $A$ to any power:
$$A^{1000}=PD^{1000}P^{-1},.$$
You should diagonalize it, if possible.
This means to find the eigenvalues and eigenvectors.
- The eigenvalues are roots of the characteristic polynomial of $A$, which is
$$detpmatrix{1-x&4\3&2-x}=(1-x)(2-x)-12=x^2-3x-10,.$$ - If you find its roots, (which together sum up to $3$ and multiply to $-10$), then find an eigenvector for both ($v_1$ and $v_2$), [e.g. $v_2:=pmatrix{1\1}$ will be an eigenvector].
- Then build the matrix $P$ with columns $(v_1|v_2)$, and calculate its inverse.
- Finally, $D:=P^{-1}AP$ will be the diagonal containing the eigenvalues, because $$AP=A,(v_1|v_2)=(lambda_1 v_1,|,lambda_2 v_2) = P,pmatrix{lambda_1&0\
0&lambda_2}$$
And after all these, you can easily raise $A$ to any power:
$$A^{1000}=PD^{1000}P^{-1},.$$
answered Dec 8 '13 at 2:35
Berci
59.7k23672
59.7k23672
add a comment |
add a comment |
Check that you get $A=PBP^{-1}$ with $B$ a diagonal matrix with entries $-2$ and $5$, and $P$ an invertible matrix. Then note that $A^2=(PBP^{-1})(PBP^{-1})=PB^2P^{-1}$, and in general you get $$A^n=PB^nP^{-1}$$
The right hand side is easy to deal with since the power of a diagonal matrix is very easy to see :)
add a comment |
Check that you get $A=PBP^{-1}$ with $B$ a diagonal matrix with entries $-2$ and $5$, and $P$ an invertible matrix. Then note that $A^2=(PBP^{-1})(PBP^{-1})=PB^2P^{-1}$, and in general you get $$A^n=PB^nP^{-1}$$
The right hand side is easy to deal with since the power of a diagonal matrix is very easy to see :)
add a comment |
Check that you get $A=PBP^{-1}$ with $B$ a diagonal matrix with entries $-2$ and $5$, and $P$ an invertible matrix. Then note that $A^2=(PBP^{-1})(PBP^{-1})=PB^2P^{-1}$, and in general you get $$A^n=PB^nP^{-1}$$
The right hand side is easy to deal with since the power of a diagonal matrix is very easy to see :)
Check that you get $A=PBP^{-1}$ with $B$ a diagonal matrix with entries $-2$ and $5$, and $P$ an invertible matrix. Then note that $A^2=(PBP^{-1})(PBP^{-1})=PB^2P^{-1}$, and in general you get $$A^n=PB^nP^{-1}$$
The right hand side is easy to deal with since the power of a diagonal matrix is very easy to see :)
answered Dec 8 '13 at 2:22
Daniel Montealegre
4,7731443
4,7731443
add a comment |
add a comment |
The well-known strategy is to diagonalize since this matrix is diagonalizable.
For a change, here is a slightly different approach which is convenient for small matrices. If you have time, try to perform both methods until the end without computer help. This is fairly equivalent. Although you have two linear systems to solve for eigenvectors, and only one here for the constants $c,d$ below.
We will compute $A^n$.
Let $p_A(X)=X^2-3X-10$ be the characteristic polynomial of the matrix $A$. By Cayley-Hamilton, $p_A(A)=0$. So it suffices to determine the degree $leq 1$ remainder in the Euclidean divison of $X^{n}$ by $p_A(X)$ to compute $A^{n}$.
For every $ngeq 1$, denote
$$
X^n=q_n(X)p_A(X)+a_nX+b_n
$$
the Euclidean division of $X^n$ by $p_A(X)$.
Since $X^2equiv 3X+10$ modulo $p_A(X)$, multiplication of the latter by $X$ yields
$$
cases{a_{n=1}=3a_n+b_n \b_{n+1}=10a_n}iffcases{a_{n+1}=3a_n+10a_{n-1}\b_{n+1}=10a_n}
$$
Solving for $a_n$ is straightforward given the theory of recurrent linear homogeneous sequences. Given that the roots of the characteristic equation (which is of course $p_A(X)=0$) are $-2$ and $5$, we have $a_n=c (-2)^n+d 5^n$. Considering the initial cases $n=0,1$, we find $c$ and $d$ whence
$$a_n=frac{5^n-(-2)^{n}}{7}quadmbox{whence} quad b_n=frac{2cdot 5^{n}+5(-2)^{n}}{7}$$
Therefore
$$
A^n=q_n(A)p_A(A)+a_nA+b_nI_2=a_nA+b_nI_2=pmatrix{a_n+b_n&4a_n\3a_n& 2a_n+b_n}
$$
hence $A^n$ is equal to
$$A^n=pmatrix{frac{3cdot 5^{n}+ (-2)^{n+2}}{7} & frac{4cdot 5^{n}- (-2)^{n+2}}{7} \ frac{3cdot 5^{n}-3cdot (-2)^{n}}{7} & frac{4cdot 5^{n}+3cdot (-2)^{n}}{7}}$$
add a comment |
The well-known strategy is to diagonalize since this matrix is diagonalizable.
For a change, here is a slightly different approach which is convenient for small matrices. If you have time, try to perform both methods until the end without computer help. This is fairly equivalent. Although you have two linear systems to solve for eigenvectors, and only one here for the constants $c,d$ below.
We will compute $A^n$.
Let $p_A(X)=X^2-3X-10$ be the characteristic polynomial of the matrix $A$. By Cayley-Hamilton, $p_A(A)=0$. So it suffices to determine the degree $leq 1$ remainder in the Euclidean divison of $X^{n}$ by $p_A(X)$ to compute $A^{n}$.
For every $ngeq 1$, denote
$$
X^n=q_n(X)p_A(X)+a_nX+b_n
$$
the Euclidean division of $X^n$ by $p_A(X)$.
Since $X^2equiv 3X+10$ modulo $p_A(X)$, multiplication of the latter by $X$ yields
$$
cases{a_{n=1}=3a_n+b_n \b_{n+1}=10a_n}iffcases{a_{n+1}=3a_n+10a_{n-1}\b_{n+1}=10a_n}
$$
Solving for $a_n$ is straightforward given the theory of recurrent linear homogeneous sequences. Given that the roots of the characteristic equation (which is of course $p_A(X)=0$) are $-2$ and $5$, we have $a_n=c (-2)^n+d 5^n$. Considering the initial cases $n=0,1$, we find $c$ and $d$ whence
$$a_n=frac{5^n-(-2)^{n}}{7}quadmbox{whence} quad b_n=frac{2cdot 5^{n}+5(-2)^{n}}{7}$$
Therefore
$$
A^n=q_n(A)p_A(A)+a_nA+b_nI_2=a_nA+b_nI_2=pmatrix{a_n+b_n&4a_n\3a_n& 2a_n+b_n}
$$
hence $A^n$ is equal to
$$A^n=pmatrix{frac{3cdot 5^{n}+ (-2)^{n+2}}{7} & frac{4cdot 5^{n}- (-2)^{n+2}}{7} \ frac{3cdot 5^{n}-3cdot (-2)^{n}}{7} & frac{4cdot 5^{n}+3cdot (-2)^{n}}{7}}$$
add a comment |
The well-known strategy is to diagonalize since this matrix is diagonalizable.
For a change, here is a slightly different approach which is convenient for small matrices. If you have time, try to perform both methods until the end without computer help. This is fairly equivalent. Although you have two linear systems to solve for eigenvectors, and only one here for the constants $c,d$ below.
We will compute $A^n$.
Let $p_A(X)=X^2-3X-10$ be the characteristic polynomial of the matrix $A$. By Cayley-Hamilton, $p_A(A)=0$. So it suffices to determine the degree $leq 1$ remainder in the Euclidean divison of $X^{n}$ by $p_A(X)$ to compute $A^{n}$.
For every $ngeq 1$, denote
$$
X^n=q_n(X)p_A(X)+a_nX+b_n
$$
the Euclidean division of $X^n$ by $p_A(X)$.
Since $X^2equiv 3X+10$ modulo $p_A(X)$, multiplication of the latter by $X$ yields
$$
cases{a_{n=1}=3a_n+b_n \b_{n+1}=10a_n}iffcases{a_{n+1}=3a_n+10a_{n-1}\b_{n+1}=10a_n}
$$
Solving for $a_n$ is straightforward given the theory of recurrent linear homogeneous sequences. Given that the roots of the characteristic equation (which is of course $p_A(X)=0$) are $-2$ and $5$, we have $a_n=c (-2)^n+d 5^n$. Considering the initial cases $n=0,1$, we find $c$ and $d$ whence
$$a_n=frac{5^n-(-2)^{n}}{7}quadmbox{whence} quad b_n=frac{2cdot 5^{n}+5(-2)^{n}}{7}$$
Therefore
$$
A^n=q_n(A)p_A(A)+a_nA+b_nI_2=a_nA+b_nI_2=pmatrix{a_n+b_n&4a_n\3a_n& 2a_n+b_n}
$$
hence $A^n$ is equal to
$$A^n=pmatrix{frac{3cdot 5^{n}+ (-2)^{n+2}}{7} & frac{4cdot 5^{n}- (-2)^{n+2}}{7} \ frac{3cdot 5^{n}-3cdot (-2)^{n}}{7} & frac{4cdot 5^{n}+3cdot (-2)^{n}}{7}}$$
The well-known strategy is to diagonalize since this matrix is diagonalizable.
For a change, here is a slightly different approach which is convenient for small matrices. If you have time, try to perform both methods until the end without computer help. This is fairly equivalent. Although you have two linear systems to solve for eigenvectors, and only one here for the constants $c,d$ below.
We will compute $A^n$.
Let $p_A(X)=X^2-3X-10$ be the characteristic polynomial of the matrix $A$. By Cayley-Hamilton, $p_A(A)=0$. So it suffices to determine the degree $leq 1$ remainder in the Euclidean divison of $X^{n}$ by $p_A(X)$ to compute $A^{n}$.
For every $ngeq 1$, denote
$$
X^n=q_n(X)p_A(X)+a_nX+b_n
$$
the Euclidean division of $X^n$ by $p_A(X)$.
Since $X^2equiv 3X+10$ modulo $p_A(X)$, multiplication of the latter by $X$ yields
$$
cases{a_{n=1}=3a_n+b_n \b_{n+1}=10a_n}iffcases{a_{n+1}=3a_n+10a_{n-1}\b_{n+1}=10a_n}
$$
Solving for $a_n$ is straightforward given the theory of recurrent linear homogeneous sequences. Given that the roots of the characteristic equation (which is of course $p_A(X)=0$) are $-2$ and $5$, we have $a_n=c (-2)^n+d 5^n$. Considering the initial cases $n=0,1$, we find $c$ and $d$ whence
$$a_n=frac{5^n-(-2)^{n}}{7}quadmbox{whence} quad b_n=frac{2cdot 5^{n}+5(-2)^{n}}{7}$$
Therefore
$$
A^n=q_n(A)p_A(A)+a_nA+b_nI_2=a_nA+b_nI_2=pmatrix{a_n+b_n&4a_n\3a_n& 2a_n+b_n}
$$
hence $A^n$ is equal to
$$A^n=pmatrix{frac{3cdot 5^{n}+ (-2)^{n+2}}{7} & frac{4cdot 5^{n}- (-2)^{n+2}}{7} \ frac{3cdot 5^{n}-3cdot (-2)^{n}}{7} & frac{4cdot 5^{n}+3cdot (-2)^{n}}{7}}$$
edited Dec 8 '13 at 4:08
answered Dec 8 '13 at 3:16
Julien
38.5k358129
38.5k358129
add a comment |
add a comment |
Perform an eigenvalue decomposition of $A$, we then get
$$A =
begin{bmatrix}
-4/5 & -1/sqrt2\
3/5 & -1/sqrt2
end{bmatrix}
begin{bmatrix}
-2 & 0\
0 & 5
end{bmatrix}
begin{bmatrix}
-4/5 & -1/sqrt2\
3/5 & -1/sqrt2
end{bmatrix}^{-1}
=VDV^{-1}
$$
where $V = begin{bmatrix}
-4/5 & -1/sqrt2\
3/5 & -1/sqrt2
end{bmatrix}$ and $D = begin{bmatrix}
-2 & 0\
0 & 5
end{bmatrix}$.
Hence,
$$A^n = underbrace{left(VDV^{-1} right)left(VDV^{-1} right)cdots left(VDV^{-1} right)}_{n text{ times}} = VD^n V^{-1}$$
This is a Jordan decomposition, not an eigenvalue decomposition (nor does an eigenvalue decomposition with a diagonal $D$ exist for non-symmetric/non-Hermitian matrix $A$), because $$V^*V = begin{bmatrix} 1 & (5sqrt{2})^{-1} \ (5sqrt{2})^{-1} & 1 end{bmatrix} ne {rm I}_2.$$
– Vedran Šego
Dec 8 '13 at 3:28
add a comment |
Perform an eigenvalue decomposition of $A$, we then get
$$A =
begin{bmatrix}
-4/5 & -1/sqrt2\
3/5 & -1/sqrt2
end{bmatrix}
begin{bmatrix}
-2 & 0\
0 & 5
end{bmatrix}
begin{bmatrix}
-4/5 & -1/sqrt2\
3/5 & -1/sqrt2
end{bmatrix}^{-1}
=VDV^{-1}
$$
where $V = begin{bmatrix}
-4/5 & -1/sqrt2\
3/5 & -1/sqrt2
end{bmatrix}$ and $D = begin{bmatrix}
-2 & 0\
0 & 5
end{bmatrix}$.
Hence,
$$A^n = underbrace{left(VDV^{-1} right)left(VDV^{-1} right)cdots left(VDV^{-1} right)}_{n text{ times}} = VD^n V^{-1}$$
This is a Jordan decomposition, not an eigenvalue decomposition (nor does an eigenvalue decomposition with a diagonal $D$ exist for non-symmetric/non-Hermitian matrix $A$), because $$V^*V = begin{bmatrix} 1 & (5sqrt{2})^{-1} \ (5sqrt{2})^{-1} & 1 end{bmatrix} ne {rm I}_2.$$
– Vedran Šego
Dec 8 '13 at 3:28
add a comment |
Perform an eigenvalue decomposition of $A$, we then get
$$A =
begin{bmatrix}
-4/5 & -1/sqrt2\
3/5 & -1/sqrt2
end{bmatrix}
begin{bmatrix}
-2 & 0\
0 & 5
end{bmatrix}
begin{bmatrix}
-4/5 & -1/sqrt2\
3/5 & -1/sqrt2
end{bmatrix}^{-1}
=VDV^{-1}
$$
where $V = begin{bmatrix}
-4/5 & -1/sqrt2\
3/5 & -1/sqrt2
end{bmatrix}$ and $D = begin{bmatrix}
-2 & 0\
0 & 5
end{bmatrix}$.
Hence,
$$A^n = underbrace{left(VDV^{-1} right)left(VDV^{-1} right)cdots left(VDV^{-1} right)}_{n text{ times}} = VD^n V^{-1}$$
Perform an eigenvalue decomposition of $A$, we then get
$$A =
begin{bmatrix}
-4/5 & -1/sqrt2\
3/5 & -1/sqrt2
end{bmatrix}
begin{bmatrix}
-2 & 0\
0 & 5
end{bmatrix}
begin{bmatrix}
-4/5 & -1/sqrt2\
3/5 & -1/sqrt2
end{bmatrix}^{-1}
=VDV^{-1}
$$
where $V = begin{bmatrix}
-4/5 & -1/sqrt2\
3/5 & -1/sqrt2
end{bmatrix}$ and $D = begin{bmatrix}
-2 & 0\
0 & 5
end{bmatrix}$.
Hence,
$$A^n = underbrace{left(VDV^{-1} right)left(VDV^{-1} right)cdots left(VDV^{-1} right)}_{n text{ times}} = VD^n V^{-1}$$
answered Dec 8 '13 at 2:35
user17762
This is a Jordan decomposition, not an eigenvalue decomposition (nor does an eigenvalue decomposition with a diagonal $D$ exist for non-symmetric/non-Hermitian matrix $A$), because $$V^*V = begin{bmatrix} 1 & (5sqrt{2})^{-1} \ (5sqrt{2})^{-1} & 1 end{bmatrix} ne {rm I}_2.$$
– Vedran Šego
Dec 8 '13 at 3:28
add a comment |
This is a Jordan decomposition, not an eigenvalue decomposition (nor does an eigenvalue decomposition with a diagonal $D$ exist for non-symmetric/non-Hermitian matrix $A$), because $$V^*V = begin{bmatrix} 1 & (5sqrt{2})^{-1} \ (5sqrt{2})^{-1} & 1 end{bmatrix} ne {rm I}_2.$$
– Vedran Šego
Dec 8 '13 at 3:28
This is a Jordan decomposition, not an eigenvalue decomposition (nor does an eigenvalue decomposition with a diagonal $D$ exist for non-symmetric/non-Hermitian matrix $A$), because $$V^*V = begin{bmatrix} 1 & (5sqrt{2})^{-1} \ (5sqrt{2})^{-1} & 1 end{bmatrix} ne {rm I}_2.$$
– Vedran Šego
Dec 8 '13 at 3:28
This is a Jordan decomposition, not an eigenvalue decomposition (nor does an eigenvalue decomposition with a diagonal $D$ exist for non-symmetric/non-Hermitian matrix $A$), because $$V^*V = begin{bmatrix} 1 & (5sqrt{2})^{-1} \ (5sqrt{2})^{-1} & 1 end{bmatrix} ne {rm I}_2.$$
– Vedran Šego
Dec 8 '13 at 3:28
add a comment |
$newcommand{+}{^{dagger}}%
newcommand{angles}[1]{leftlangle #1 rightrangle}%
newcommand{braces}[1]{leftlbrace #1 rightrbrace}%
newcommand{bracks}[1]{leftlbrack #1 rightrbrack}%
newcommand{ceil}[1]{,leftlceil #1 rightrceil,}%
newcommand{dd}{{rm d}}%
newcommand{ds}[1]{displaystyle{#1}}%
newcommand{equalby}[1]{{#1 atop {= atop vphantom{huge A}}}}%
newcommand{expo}[1]{,{rm e}^{#1},}%
newcommand{fermi}{,{rm f}}%
newcommand{floor}[1]{,leftlfloor #1 rightrfloor,}%
newcommand{half}{{1 over 2}}%
newcommand{ic}{{rm i}}%
newcommand{iff}{Longleftrightarrow}
newcommand{imp}{Longrightarrow}%
newcommand{isdiv}{,left.rightvert,}%
newcommand{ket}[1]{leftvert #1rightrangle}%
newcommand{ol}[1]{overline{#1}}%
newcommand{pars}[1]{left( #1 right)}%
newcommand{partiald}[3]{frac{partial^{#1} #2}{partial #3^{#1}}}
newcommand{pp}{{cal P}}%
newcommand{root}[2]{,sqrt[#1]{,#2,},}%
newcommand{sech}{,{rm sech}}%
newcommand{sgn}{,{rm sgn}}%
newcommand{totald}[3]{frac{{rm d}^{#1} #2}{{rm d} #3^{#1}}}
newcommand{ul}[1]{underline{#1}}%
newcommand{verts}[1]{leftvert, #1 ,rightvert}$
$$
A = pars{begin{array}{cc}1 & 4 \ 3 & 2end{array}}
=
{3 over 2}
overbrace{pars{begin{array}{cc}1 & 0 \ 0 & 1end{array}}}^{ds{1}}
+
overbrace{{7 over 2}}^{ds{b_{x}}}
overbrace{pars{begin{array}{cc}0 & 1 \ 1 & 0end{array}}}^{ds{sigma_{x}}}
+
overbrace{half,ic}^{ds{b_{y}}}
overbrace{pars{begin{array}{cc}0 & -ic \ ic & 0end{array}}}^{ds{sigma_{y}}}
overbrace{-
half}^{ds{b_{z}}}overbrace{pars{begin{array}{cc}1 & 0 \ 1 & -1end{array}}}^{ds{sigma_{z}}}
=
{3 over 2} + vec{b}cdotvec{sigma}
$$
where $braces{sigma_{ell},, ell = x, y, z}$ are the
Pauli matrices. Then,
$expo{At} = expo{3t/2}expo{vec{b}cdotvec{sigma}t}$. However, $vec{b}cdotvec{sigma}vec{b}cdotvec{sigma} = vec{b}cdotvec{b} = 49/4$ such that
$$
pars{totald[2]{}{t} - {49 over 4}}expo{vec{b}cdotvec{sigma}t} = 0
quadimpquad
expo{vec{b}cdotvec{sigma}t} = muexpo{7t/2} + nuexpo{-7t/2},,quad
mu, nu mbox{are constants},
$$
with $1 = mu + nu$ and $2vec{b}cdotvec{sigma}/7 = mu - nu$ such that
$mu = 1/2 + vec{b}cdotvec{sigma}/7$ and
$nu = 1/2 - vec{b}cdotvec{sigma}/7$:
$$
expo{At} = muexpo{5t} + nuexpo{-2t},,
quadsum_{n = 0}^{infty}{t^{n} over n!},A^{n}
=
sum_{n = 0}^{infty}{t^{n} over n!},
bracks{5^{n}mu + pars{-1}^{n}2^{n}nu}
$$
$$color{#0000ff}{large%
leftlbrace%
begin{array}{rcl}
A^{n} = 5^{n}mu + pars{-1}^{n}2^{n}nu
& = &
halfbracks{5^{n} + pars{-1}^{n}2^{n}}
+
{1 over 7}bracks{5^{n} - pars{-1}^{n}2^{n}}vec{b}cdotvec{sigma}
\[3mm]
vec{b}cdotvec{sigma}
& = &
A - {3 over 2}
=
pars{begin{array}{cc}-1/2 & 5/2 \ 3/2 & 1/2end{array}}
=
halfpars{begin{array}{cc}-1 & 5 \ 3 & 1end{array}}
end{array}right.}
$$
add a comment |
$newcommand{+}{^{dagger}}%
newcommand{angles}[1]{leftlangle #1 rightrangle}%
newcommand{braces}[1]{leftlbrace #1 rightrbrace}%
newcommand{bracks}[1]{leftlbrack #1 rightrbrack}%
newcommand{ceil}[1]{,leftlceil #1 rightrceil,}%
newcommand{dd}{{rm d}}%
newcommand{ds}[1]{displaystyle{#1}}%
$$
A = \begin{pmatrix}1 & 4 \\ 3 & 2\end{pmatrix}
=
\frac{3}{2}\,
\overbrace{\begin{pmatrix}1 & 0 \\ 0 & 1\end{pmatrix}}^{\displaystyle 1}
+
\overbrace{\frac{7}{2}}^{\displaystyle b_{x}}\,
\overbrace{\begin{pmatrix}0 & 1 \\ 1 & 0\end{pmatrix}}^{\displaystyle \sigma_{x}}
+
\overbrace{\frac{\mathrm{i}}{2}}^{\displaystyle b_{y}}\,
\overbrace{\begin{pmatrix}0 & -\mathrm{i} \\ \mathrm{i} & 0\end{pmatrix}}^{\displaystyle \sigma_{y}}
+
\overbrace{\left(-\frac{1}{2}\right)}^{\displaystyle b_{z}}\,
\overbrace{\begin{pmatrix}1 & 0 \\ 0 & -1\end{pmatrix}}^{\displaystyle \sigma_{z}}
=
\frac{3}{2} + \vec{b}\cdot\vec{\sigma}
$$
where $\left\{\sigma_{\ell},\ \ell = x, y, z\right\}$ are the
Pauli matrices. Then
$\mathrm{e}^{At} = \mathrm{e}^{3t/2}\,\mathrm{e}^{\vec{b}\cdot\vec{\sigma}\,t}$. Moreover, $\left(\vec{b}\cdot\vec{\sigma}\right)^{2} = \vec{b}\cdot\vec{b} = \frac{49}{4}$, so that
$$
\left(\frac{\mathrm{d}^{2}}{\mathrm{d}t^{2}} - \frac{49}{4}\right)\mathrm{e}^{\vec{b}\cdot\vec{\sigma}\,t} = 0
\quad\Longrightarrow\quad
\mathrm{e}^{\vec{b}\cdot\vec{\sigma}\,t} = \mu\,\mathrm{e}^{7t/2} + \nu\,\mathrm{e}^{-7t/2},
\qquad \mu,\ \nu\ \mbox{constant matrices},
$$
with $\mu + \nu = 1$ (set $t = 0$) and $\mu - \nu = \frac{2}{7}\,\vec{b}\cdot\vec{\sigma}$ (differentiate at $t = 0$), so that
$\mu = \frac{1}{2} + \frac{1}{7}\,\vec{b}\cdot\vec{\sigma}$ and
$\nu = \frac{1}{2} - \frac{1}{7}\,\vec{b}\cdot\vec{\sigma}$:
$$
\mathrm{e}^{At} = \mu\,\mathrm{e}^{5t} + \nu\,\mathrm{e}^{-2t},
\qquad
\sum_{n = 0}^{\infty}\frac{t^{n}}{n!}\,A^{n}
=
\sum_{n = 0}^{\infty}\frac{t^{n}}{n!}\,
\left[5^{n}\mu + \left(-2\right)^{n}\nu\right].
$$
Comparing coefficients of $t^{n}$:
$$\color{#0000ff}{\large
\left\lbrace
\begin{array}{rcl}
A^{n} = 5^{n}\mu + \left(-2\right)^{n}\nu
& = &
\frac{1}{2}\left[5^{n} + \left(-2\right)^{n}\right]
+
\frac{1}{7}\left[5^{n} - \left(-2\right)^{n}\right]\vec{b}\cdot\vec{\sigma}
\\[3mm]
\vec{b}\cdot\vec{\sigma}
& = &
A - \frac{3}{2}
=
\begin{pmatrix}-1/2 & 4 \\ 3 & 1/2\end{pmatrix}
=
\frac{1}{2}\begin{pmatrix}-1 & 8 \\ 6 & 1\end{pmatrix}
\end{array}\right.}
$$
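As a sanity check (not part of the original answer), the closed form $A^{n} = \frac{1}{2}\left[5^{n}+(-2)^{n}\right] + \frac{1}{7}\left[5^{n}-(-2)^{n}\right]\left(A - \frac{3}{2}\right)$ can be compared against exact integer matrix powers; this short Python sketch does so with repeated squaring and big integers:

```python
# Substituting b.sigma = A - (3/2)I into the boxed formula gives, entrywise,
# A^n = (1/7) [[3*5^n + 4*(-2)^n, 4*(5^n - (-2)^n)],
#              [3*(5^n - (-2)^n), 4*5^n + 3*(-2)^n]].

def mat_mul(X, Y):
    """2x2 integer matrix product."""
    return [[X[0][0]*Y[0][0] + X[0][1]*Y[1][0], X[0][0]*Y[0][1] + X[0][1]*Y[1][1]],
            [X[1][0]*Y[0][0] + X[1][1]*Y[1][0], X[1][0]*Y[0][1] + X[1][1]*Y[1][1]]]

def mat_pow(X, n):
    """X^n by repeated squaring (exact, Python big ints)."""
    R = [[1, 0], [0, 1]]
    while n:
        if n & 1:
            R = mat_mul(R, X)
        X = mat_mul(X, X)
        n >>= 1
    return R

def closed_form(n):
    # Every entry is divisible by 7 (5 and -2 are congruent mod 7),
    # so integer division is exact.
    p, q = 5**n, (-2)**n
    return [[(3*p + 4*q) // 7, (4*p - 4*q) // 7],
            [(3*p - 3*q) // 7, (4*p + 3*q) // 7]]

A = [[1, 4], [3, 2]]
assert mat_pow(A, 1000) == closed_form(1000)
```

The two computations agree exactly for $n = 1000$ (a 699-digit-entry matrix), which also confirms the corrected $\vec{b}\cdot\vec{\sigma}$ matrix above.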
edited Dec 8 '13 at 5:26
answered Dec 8 '13 at 4:04
Felix Marin
Two additional methods that you can use once you know the eigenvalues $lambda_1$ and $lambda_2$:
- Decompose $A$ into $\lambda_1P_1+\lambda_2P_2$, where $P_1 = \frac{A-\lambda_2I}{\lambda_1-\lambda_2}$ and $P_2=\frac{A-\lambda_1I}{\lambda_2-\lambda_1}$ are projections onto the corresponding eigenspaces with $P_1P_2=P_2P_1=0$. If you expand $A^{1000}$ using the binomial theorem, you’ll find that all but two terms vanish, giving $\lambda_1^{1000}P_1+\lambda_2^{1000}P_2$.
- Use the Cayley–Hamilton theorem to write $A^{1000}=aI+bA$ for some undetermined coefficients $a$ and $b$, then use the fact that this equation is also satisfied by the eigenvalues, which gives you the system of linear equations $a+b\lambda_i=\lambda_i^{1000}$ to solve for $a$ and $b$.
When $A$ has repeated eigenvalues, you’ll need to modify the above methods a bit, but the underlying ideas are still the same.
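The Cayley–Hamilton method is easy to carry out exactly. Here is a short Python sketch (not part of the original answer) that solves $a + b\lambda_i = \lambda_i^n$ for $a$ and $b$ with exact rational arithmetic, using the eigenvalues $5$ and $-2$ of this particular $A$ (roots of the characteristic polynomial $x^2 - 3x - 10$):

```python
from fractions import Fraction

def power_via_cayley_hamilton(A, n, lam1, lam2):
    """Return A^n = a*I + b*A, where a + b*lam_i = lam_i**n for i = 1, 2.

    Assumes distinct eigenvalues lam1 != lam2; exact via Fraction.
    """
    b = Fraction(lam1**n - lam2**n, lam1 - lam2)
    a = lam1**n - b * lam1
    return [[a + b*A[0][0], b*A[0][1]],
            [b*A[1][0], a + b*A[1][1]]]

A = [[1, 4], [3, 2]]   # eigenvalues 5 and -2
P = power_via_cayley_hamilton(A, 1000, 5, -2)
```

For $n=1$ this reduces to $a=0$, $b=1$, returning $A$ itself, which is a quick consistency check on the linear system.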
answered 2 days ago
amd