Why is the exterior power $\bigwedge^k V$ an irreducible representation of $GL(V)$?


























$\newcommand{\GL}{\operatorname{GL}}$



Let $V$ be a real $n$-dimensional vector space. For $1<k<n$ we have a natural representation of $\GL(V)$ via the $k$-th exterior power:



$\rho:\GL(V) \to \GL\bigl(\bigwedge^k V\bigr)$, given by $\rho(A)=\bigwedge^k A$. I am trying to show $\rho$ is an irreducible representation. Let $0 \neq W \le \bigwedge^k V$ be a subrepresentation. If we can show $W$ contains a non-zero decomposable element, we are done.



Indeed, suppose $W \subsetneq \bigwedge^k V$. Then, since the decomposable elements span $\bigwedge^k V$, there exists a decomposable element $\sigma = v_1 \wedge \dots \wedge v_k \neq 0$ such that $\sigma \notin W$. We assumed $W$ contains a non-zero decomposable element $\sigma' = u_1 \wedge \dots \wedge u_k \neq 0$. Define a map $A \in \GL(V)$ by extending $u_i \mapsto v_i$. Then



$$\rho(A)(\sigma')=\bigwedge^k A\,(u_1 \wedge \dots \wedge u_k)=\sigma \notin W,$$



while $\sigma' \in W$, contradicting the fact that $W$ is a subrepresentation (i.e. $\GL(V)$-invariant).
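To make the map $A$ precise (one way to carry out the extension step): since $\sigma \neq 0$ and $\sigma' \neq 0$, the tuples $(v_1, \dots, v_k)$ and $(u_1, \dots, u_k)$ are each linearly independent, so both extend to bases of $V$, and we may let $A$ be the invertible map sending the second basis to the first, so that $A u_i = v_i$. Since $\bigwedge^k A$ acts factor by factor on decomposable elements,

$$\Bigl(\bigwedge^k A\Bigr)(u_1 \wedge \dots \wedge u_k) = A u_1 \wedge \dots \wedge A u_k = v_1 \wedge \dots \wedge v_k = \sigma.$$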




So, the question reduces to the following: Why does every non-zero subrepresentation contain a non-zero decomposable element?




I asked an even more naive question here: whether or not every subspace of dimension greater than $1$ contains a non-zero decomposable element.






























  • A nice conceptual way to work with this is to first decompose the representation as a module for the group of diagonal matrices (into so-called weight spaces). Then note what happens to the weight of a vector in one of these subspaces when one acts by suitable upper triangular unipotent matrices.
    – Tobias Kildetoft, 2 days ago

  • Thanks. Unfortunately, I know barely anything about the machinery of representation theory. Can you please elaborate on this or give me a reference? (I don't know what the weight of a vector is, and naive googling only found something in the context of representations of Lie algebras, not Lie groups.)
    – Asaf Shachar, 2 days ago

  • The definition is essentially the same. The representation decomposes as a sum of $1$-dimensional subspaces, and a vector in such a subspace will be acted on via a scalar. This scalar depends on the element acting, giving a linear character of the subgroup of diagonal matrices, and this character is what is called the weight of the vector. It may be a bit much to get into if none of this is familiar, but I would still advise you to try writing it up explicitly for $k=1$ when $\dim(V) = 2$ to get a feel for what happens.
    – Tobias Kildetoft, 2 days ago
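A minimal worked version of this last suggestion, for $k = 1$ and $\dim(V) = 2$ over $\mathbb{R}$: the diagonal subgroup acts on the standard basis by

$$\begin{pmatrix} t_1 & 0 \\ 0 & t_2 \end{pmatrix} e_1 = t_1 e_1, \qquad \begin{pmatrix} t_1 & 0 \\ 0 & t_2 \end{pmatrix} e_2 = t_2 e_2,$$

so the weight spaces are $\mathbb{R}e_1$ and $\mathbb{R}e_2$, with weights $t \mapsto t_1$ and $t \mapsto t_2$. The upper triangular unipotent matrix $\begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix}$ sends $e_2$ to $e_1 + e_2$, so a $\GL(V)$-invariant subspace containing the weight vector $e_2$ must also contain $e_1$, and is therefore all of $V = \bigwedge^1 V$.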
















group-theory representation-theory lie-groups exterior-algebra tensor-decomposition






asked 2 days ago by Asaf Shachar; edited yesterday by Zvi








1 Answer
Pick a basis $e_1, \dots, e_n$ of $V$ so that we can identify $GL(V)$ with $GL_n(F)$ (we'll start out working with an arbitrary base field $F$ and then restrict $F$ later). Write $T$ for the subgroup of $GL_n(F)$ consisting of diagonal matrices. An element of $T$ consists of some diagonal entries $(t_1, \dots, t_n)$ and acts on $\Lambda^k(V)$ by sending $e_i$ to $t_i e_i$, then extending multiplicatively.
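Explicitly, for $t = \operatorname{diag}(t_1, \dots, t_n) \in T$ and indices $i_1 < i_2 < \dots < i_k$, this action reads

$$t \cdot \left(e_{i_1} \wedge e_{i_2} \wedge \dots \wedge e_{i_k}\right) = t_{i_1} t_{i_2} \cdots t_{i_k} \, \left(e_{i_1} \wedge e_{i_2} \wedge \dots \wedge e_{i_k}\right).$$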



What this means is that each pure tensor $e_{i_1} \wedge e_{i_2} \wedge \dots \wedge e_{i_k} \in \Lambda^k(V)$ is a simultaneous eigenvector for every element of $T$; said another way, it spans a $1$-dimensional (hence simple) subrepresentation of $\Lambda^k(V)$, considered as a representation of $T$. (These are the "weight spaces" of this representation.) Since $\Lambda^k(V)$ is the direct sum of these $1$-dimensional subspaces, it follows that $\Lambda^k(V)$ is semisimple as a representation of $T$.
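In symbols, writing $e_I = e_{i_1} \wedge \dots \wedge e_{i_k}$ for a $k$-element subset $I = \{i_1 < \dots < i_k\}$ (a shorthand introduced here for brevity, not used in the answer itself), the decomposition is

$$\Lambda^k(V) = \bigoplus_{\substack{I \subseteq \{1, \dots, n\} \\ |I| = k}} F \cdot e_I,$$

with $T$ acting on the summand $F \cdot e_I$ through the character $\chi_I(t) = \prod_{i \in I} t_i$.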



The significance of semisimplicity is that any $GL(V)$-subrepresentation of $\Lambda^k(V)$ is also a $T$-subrepresentation, and subrepresentations of semisimple representations are semisimple; they must also have the same simple components, in the same or smaller multiplicities. Moreover, if $F$ is any field except $\mathbb{F}_2$ (over $\mathbb{F}_2$, unfortunately, $T$ is the trivial group), the different $1$-dimensional representations above are all nonisomorphic. The conclusion from here is that any $GL(V)$-subrepresentation of $\Lambda^k(V)$ must be a direct sum of weight spaces.
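To verify the pairwise-distinctness claim when $|F| > 2$ (a short check, using the shorthand $\chi_I(t) = \prod_{i \in I} t_i$ from above): if $I \neq J$ are two $k$-element index sets, pick $i \in I \setminus J$ and $\lambda \in F$ with $\lambda \neq 0, 1$, and let $t_\lambda \in T$ have $\lambda$ in diagonal position $i$ and $1$ everywhere else. Then

$$\chi_I(t_\lambda) = \lambda \neq 1 = \chi_J(t_\lambda),$$

so the characters attached to different weight spaces really are different.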



But now we're done (again, for any field $F$ except $\mathbb{F}_2$), for example because $GL(V)$ acts transitively on these weight spaces.
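One way to see this transitivity (a sketch of the final step): given $k$-element index sets $\{i_1 < \dots < i_k\}$ and $\{j_1 < \dots < j_k\}$, choose any permutation of $\{1, \dots, n\}$ sending each $i_m$ to $j_m$ and let $P \in GL_n(F)$ be the corresponding permutation matrix. Then

$$\left(\Lambda^k P\right)\left(e_{i_1} \wedge \dots \wedge e_{i_k}\right) = e_{j_1} \wedge \dots \wedge e_{j_k},$$

so a nonzero $GL(V)$-subrepresentation, being a direct sum of weight spaces, contains one weight space, hence all of them, and is therefore all of $\Lambda^k(V)$.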






answered 10 hours ago by Qiaochu Yuan





























