Proof that the dimension of a matrix row space is equal to the dimension of its column space


























I have the following theorem:



Theorem 3.12. Let $A$ be an $m \times n$ matrix. Then the dimension of its row space is equal to the dimension of its column space.



And the following proof is given:




Proof. Suppose that $\lbrace v_1,v_2,\dots,v_k\rbrace$ is a basis for the column space of $A$. Then each column of $A$ can be expressed as a linear combination of these vectors; suppose that the $i$-th column $c_i$ is given by $$c_i = \gamma_{1i}v_1+\gamma_{2i}v_2+\dots+\gamma_{ki}v_k$$ Now form two matrices as follows: $B$ is an $m\times k$ matrix whose columns are the basis vectors $v_i$, while $C=(\gamma_{ij})$ is a $k\times n$ matrix whose $i$-th column contains the coefficients $\gamma_{1i},\gamma_{2i},\dots,\gamma_{ki}$. It then follows$^7$ that $A=BC$.



However, we can also view the product $A=BC$ as expressing the rows of $A$ as linear combinations of the rows of $C$, with the $i$-th row of $B$ giving the coefficients for the linear combination that determines the $i$-th row of $A$. Therefore, the rows of $C$ are a spanning set for the row space of $A$, and so the dimension of the row space of $A$ is at most $k$. We conclude that: $$\dim(\operatorname{rowsp}(A))\leq\dim(\operatorname{colsp}(A))$$
Applying the same argument to $A^t$, we conclude that: $$\dim(\operatorname{colsp}(A))\leq\dim(\operatorname{rowsp}(A))$$ and hence these values are equal.
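
Here is the construction instantiated on a tiny example of my own, as far as I can make it out (so the numbers are mine, not the book's): take
$$
A=\begin{pmatrix}1&2&3\\0&1&1\end{pmatrix},\qquad
v_1=\begin{pmatrix}1\\0\end{pmatrix},\quad
v_2=\begin{pmatrix}2\\1\end{pmatrix},
$$
so the third column is $c_3 = 1\cdot v_1+1\cdot v_2$, and
$$
B=\begin{pmatrix}1&2\\0&1\end{pmatrix},\qquad
C=\begin{pmatrix}1&0&1\\0&1&1\end{pmatrix},\qquad
BC=\begin{pmatrix}1&2&3\\0&1&1\end{pmatrix}=A\,.
$$
Read row-wise, the first row of $A$ is $1\cdot(1,0,1)+2\cdot(0,1,1)$, i.e. a combination of the rows of $C$.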




However, I am finding this proof impossible to follow and understand. Can someone please offer an alternative proof or explain what this proof is saying?



Thank you.










linear-algebra






asked Aug 22 '16 at 20:56 · The Pointer
edited Aug 23 '16 at 2:45 · Mark












Perhaps it'll be better if someone simply clarifies the proof I posted up? A detailed walk-through of each step would be nice. :)
– The Pointer
Aug 23 '16 at 6:45






2 Answers































You can also consider the following explanation for the fact that the row dimension of a matrix equals its column dimension. For that I will use what is called the rank of a matrix.



The rank $r$ of a matrix can be defined as the number of non-zero singular values of the matrix. So, applying the singular value decomposition, we get $A=U\Sigma V^T$. This implies that $\dim(R(A))=r$, as the range of $A$ is spanned by the first $r$ columns of $U$. We know that the range of $A$ is defined as the subspace spanned by the columns of $A$, so its dimension will be $r$.



If we take the transpose of the matrix and compute its SVD, we see that $A^T=V\Sigma^T U^T$, and as $\Sigma^T$ has the same number of non-zero elements as the $\Sigma$ for $A$, the rank of this matrix will still be $r$. So, as was done for $A$, the dimension of the range of $A^T$ is also equal to $r$; but since the range of $A^T$ is the row space of $A$, we conclude that the dimensions of both spaces must be the same and equal to the rank of the matrix $A$.
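
For what it's worth, here is a minimal numerical sketch of this argument in NumPy (the matrix below is just an arbitrary rank-2 example of mine; nothing depends on it):

    import numpy as np

    # A 4x3 example whose third column is the sum of the first two,
    # so its column space has dimension 2.
    A = np.array([[1., 0., 1.],
                  [0., 1., 1.],
                  [2., 1., 3.],
                  [1., 1., 2.]])

    # Singular values of A and of A^T (compute_uv=False returns only them).
    s_A  = np.linalg.svd(A,   compute_uv=False)
    s_At = np.linalg.svd(A.T, compute_uv=False)

    tol = 1e-10
    # Both counts print 2: the same number of non-zero singular values,
    # hence dim(colsp A) = dim(rowsp A).
    print(sum(s_A > tol), sum(s_At > tol))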






answered Aug 22 '16 at 21:18 · Josu Etxezarreta Martinez













  • Thanks but I don't quite understand this explanation either. I'm not familiar with terms such as 'singular value decomposition'.
    – The Pointer
    Aug 22 '16 at 21:27






  • What the proof is basically saying is the following: it first takes the columns of the matrix $A$ and sees that there are $k$ linearly independent columns, implying that all the others can be written as linear combinations of those $k$ vectors. Then the columns of $A$ are expressed as the matrix product $BC$, where $B$ is just the matrix whose columns are the $k$ basis vectors of the column space and $C$ is the matrix containing the coefficients needed to create the columns of $A$ from those basis vectors. Then the reasoning is based on the fact that a matrix product can be viewed in many different...
    – Josu Etxezarreta Martinez
    Aug 23 '16 at 8:11






  • ...ways (see outer and inner products, en.wikipedia.org/wiki/Matrix_multiplication): it can then be seen that the rows of $A$ are spanned by the rows of the newly generated matrix $C$, with the coefficients given by $B$. This implies that, as $C$ is $k\times n$, the dimension of the row space is at most $k$, so it has to be less than or equal to the column space dimension. Finally, if you apply all of this reasoning to $A^T$, you get the same result as before; but as the rows of $A$ are the columns of $A^T$, you get the last inequality...
    – Josu Etxezarreta Martinez
    Aug 23 '16 at 8:15










  • As both of the inequalities are true, both dimensions have to be the same. Hope this helps!
    – Josu Etxezarreta Martinez
    Aug 23 '16 at 8:16










  • Got it! Thank you. :)
    – The Pointer
    Aug 23 '16 at 17:08
































Sorry to revive an old thread. The reason I'm adding a new answer, with the same proof structure as the quoted proof, is just because I struggled too with this, and thought that writing a detailed formal proof would help me (and hopefully others) understand the proof step by step. Any feedback is welcome.



Anyway, here is the proof.








Proof



Let $A\in\mathcal{M}_{m\times n}(\mathbb{K})$, where $\mathbb{K}$ is any field. Let $\{\boldsymbol{v^1},\ldots,\boldsymbol{v^k}\}$ be a basis of the column space of $A$, where $k\in\{1,\ldots,n\}$. $\boldsymbol{v^i}$ is a vector in $\mathbb{K}^m$ (the vector space of $m$-tuples with entries in $\mathbb{K}$) for all $i\in\{1,\ldots,k\}$. Thus each column in $A$ can be expressed as a linear combination of $\boldsymbol{v^1},\ldots,\boldsymbol{v^k}$. That is, for every $j\in\{1,\ldots,n\}$ there exist unique coefficients $\lambda_{1j},\ldots,\lambda_{kj}\in\mathbb{K}$ such that
$$
\forall j\in\{1,\ldots,n\}\qquad \boldsymbol{u^j} = \sum_{\ell=1}^{k} \lambda_{\ell j} \boldsymbol{v^\ell}\,,
$$
where $\boldsymbol{u^j}$ denotes the $j$-th column in $A$.






Let $B\in\mathcal{M}_{m\times k}(\mathbb{K})$ be the matrix with such vectors as columns:
$$
B =
\begin{pmatrix}
\vert & & \vert \\
\boldsymbol{v^1} & \cdots & \boldsymbol{v^k} \\
\vert & & \vert
\end{pmatrix}\,.
$$
Let $[s]$ denote $\{1,\ldots,s\}$ for all $s\in\mathbb{N}$. Let $C\in\mathcal{M}_{k\times n}(\mathbb{K})$ be the matrix with the aforementioned coefficients:
$$
C = (\lambda_{ij})_{(i,j)\in[k]\times[n]} =
\begin{pmatrix}
\lambda_{11} & \cdots & \lambda_{1n} \\
\vdots & \ddots & \vdots \\
\lambda_{k1} & \cdots & \lambda_{kn}
\end{pmatrix}\,.
$$






Now consider the matrix product $BC\in\mathcal{M}_{m\times n}(\mathbb{K})$. Let $(bc)_{ij}$, $b_{ij}$ and $c_{ij}$ denote the $(i,j)$-th element of $BC$, $B$ and $C$ respectively. By definition of the matrix product,
\begin{equation}\tag{1}\label{foo}
(bc)_{ij} = \sum_{\ell=1}^{k} b_{i\ell}c_{\ell j} \qquad\forall (i,j)\in[m]\times[n]\,.
\end{equation}
Let us consider the $j$-th column of $BC$ for an arbitrary $j\in[n]$. Let $v^\ell_i$ denote the $i$-th component of $\boldsymbol{v^\ell}$ for all $\ell\in[k]$ and for all $i\in[m]$.
\begin{multline*}
\left((bc)_{ij}\right)_{i\in[m]} = \left( \sum_{\ell=1}^{k} b_{i\ell}c_{\ell j} \right)_{i\in[m]} =
\begin{pmatrix}
\sum_{\ell=1}^{k} b_{1\ell}c_{\ell j} \\
\vdots \\
\sum_{\ell=1}^{k} b_{m\ell}c_{\ell j}
\end{pmatrix} = \\
=
\begin{pmatrix}
\sum_{\ell=1}^{k} v^\ell_1\cdot\lambda_{\ell j} \\
\vdots \\
\sum_{\ell=1}^{k} v^\ell_m\cdot\lambda_{\ell j}
\end{pmatrix} = \sum_{\ell=1}^{k} \lambda_{\ell j}
\begin{pmatrix}
v^\ell_1 \\
\vdots \\
v^\ell_m
\end{pmatrix} = \sum_{\ell=1}^{k} \lambda_{\ell j}\boldsymbol{v^\ell} = \boldsymbol{u^j}\,.
\end{multline*}
Thus, the columns of $BC$ are the columns of $A$. Ergo, $A=BC$.
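
(As an aside, the factorization is easy to check numerically. Here is a quick NumPy sketch with an example matrix of my own, whose first two columns happen to form a basis of its column space; nothing in the proof depends on this particular choice.)

    import numpy as np

    # A 3x4 example of rank k = 2; its first two columns form a basis
    # of the column space.
    A = np.array([[1., 2., 3., 0.],
                  [0., 1., 1., 1.],
                  [1., 3., 4., 1.]])

    B = A[:, :2]                  # m x k: the basis vectors as columns
    # k x n coefficients: column j of C solves B @ C[:, j] = A[:, j].
    C, *_ = np.linalg.lstsq(B, A, rcond=None)

    print(np.allclose(B @ C, A))  # True: A = BC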






On the other hand, let us consider the $i$-th row of $A$, denoted by $\boldsymbol{r^i}$. That is,
$$
\boldsymbol{r^i} = (a_{ij})_{j\in[n]} \qquad \forall i\in[m]\,.
$$
Again, by the definition of matrix multiplication,
$$
a_{ij} = \sum_{\ell=1}^{k} b_{i\ell}c_{\ell j} \qquad\forall (i,j)\in[m]\times[n]
$$
(this is the same equation found in \eqref{foo}). Thus,
\begin{multline}\tag{2}\label{rows}
\boldsymbol{r^i} = \left(\sum_{\ell=1}^{k} b_{i\ell}c_{\ell j} \right)_{j\in[n]} =
\begin{pmatrix}
\sum_{\ell=1}^{k} b_{i\ell}c_{\ell 1} & \cdots & \sum_{\ell=1}^{k} b_{i\ell}c_{\ell n}
\end{pmatrix} = \\
=
\begin{pmatrix}
\sum_{\ell=1}^{k} v^\ell_i\cdot\lambda_{\ell 1} & \cdots & \sum_{\ell=1}^{k} v^\ell_i\cdot\lambda_{\ell n}
\end{pmatrix}\,.
\end{multline}
Now, let $\boldsymbol{\Lambda^\ell}$ be the $\ell$-th row of $C$ for all $\ell\in[k]$, as a row vector:
\begin{equation*}
\boldsymbol{\Lambda^\ell} =
\begin{pmatrix}
\lambda_{\ell 1} & \cdots & \lambda_{\ell n}
\end{pmatrix}\,.
\end{equation*}
Thus, with the same notation as before, $\Lambda^\ell_i = \lambda_{\ell i}$ for all $i\in[n]$. Also, let $\mu_{i\ell}$ denote $v^\ell_i$ for all $i\in[m]$ and for all $\ell\in[k]$; this is merely a change of notation to emphasize that the $v^\ell_i$ can be seen as "coefficients". Thus, continuing to develop equation \eqref{rows}, we get
\begin{multline*}
\boldsymbol{r^i} =
\begin{pmatrix}
\sum_{\ell=1}^{k} \mu_{i\ell}\cdot\Lambda^\ell_1 & \cdots & \sum_{\ell=1}^{k} \mu_{i\ell}\cdot\Lambda^\ell_n
\end{pmatrix} = \\
=
\sum_{\ell=1}^{k} \mu_{i\ell}
\begin{pmatrix}
\Lambda^\ell_1 & \cdots & \Lambda^\ell_n
\end{pmatrix} =
\sum_{\ell=1}^{k} \mu_{i\ell} \boldsymbol{\Lambda^\ell}\,.
\end{multline*}
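
(Numerically, with the same example $A=BC$ as in the previous sketch, one can check row by row that $\boldsymbol{r^i}=\sum_{\ell}\mu_{i\ell}\boldsymbol{\Lambda^\ell}$:)

    import numpy as np

    # Same example as before, with B and C written out explicitly.
    B = np.array([[1., 2.],
                  [0., 1.],
                  [1., 3.]])
    C = np.array([[1., 0., 1., -2.],
                  [0., 1., 1.,  1.]])
    A = B @ C

    # Row i of A = sum_l mu_{il} * Lambda^l, where mu_{il} = B[i, l]
    # and Lambda^l = C[l] is the l-th row of C.
    for i in range(A.shape[0]):
        combo = sum(B[i, l] * C[l] for l in range(B.shape[1]))
        assert np.allclose(combo, A[i])
    print("every row of A is a combination of the rows of C")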






Therefore, the rows of $A$ (i.e., $\boldsymbol{r^i}$) are linear combinations of the rows of $C$ (i.e., $\boldsymbol{\Lambda^\ell}$). Thus, we necessarily have
$$
\mathrm{rowsp} A \subseteq \mathrm{rowsp} C \implies \dim (\mathrm{rowsp} A) \le \dim (\mathrm{rowsp} C)\,.
$$
Since $C$ has $k$ rows, its row space can have at most dimension $k$, which is the dimension of $\mathrm{colsp} A$ (by hypothesis):
$$
\dim(\mathrm{rowsp} C) \le k = \dim(\mathrm{colsp} A)\,.
$$
Combining both inequalities, we have
$$
\dim (\mathrm{rowsp} A) \le \dim (\mathrm{colsp} A)\,.
$$
Applying this whole argument again to $A^\mathrm{t}$,
$$
\dim (\mathrm{rowsp} A^\mathrm{t}) \le \dim (\mathrm{colsp} A^\mathrm{t}) \iff \dim (\mathrm{colsp} A) \le \dim (\mathrm{rowsp} A)\,.
$$
Since we have both $\dim (\mathrm{rowsp} A) \le \dim (\mathrm{colsp} A)$ and $\dim (\mathrm{colsp} A) \le \dim (\mathrm{rowsp} A)$, we conclude that
$$
\dim (\mathrm{colsp} A) = \dim (\mathrm{rowsp} A)\,. \quad \square
$$
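
(Finally, the equality can be observed on any instance with NumPy's `matrix_rank`, which computes exactly the dimension of the column space; the random example below is mine.)

    import numpy as np

    rng = np.random.default_rng(0)
    # A random 5x7 matrix of rank 2, built as a product of thin factors.
    A = rng.standard_normal((5, 2)) @ rng.standard_normal((2, 7))

    # dim(colsp A) and dim(rowsp A) = dim(colsp A^T) agree.
    print(np.linalg.matrix_rank(A), np.linalg.matrix_rank(A.T))  # 2 2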






answered Jan 6 at 13:05 · Anakhand












