The QH QM QA thread.

This means that the 2x2 matrices act on a 2-component, real vector, so we have 2 degrees of freedom---let's call them ``spin up'' and ``spin down''.
Not sure I get this. Do you mean, for example, that for some vector $$v \in R^3$$, say, the image under the action of SU(2) will have "components" (unfortunate choice of words in this context!) which will be, say, $$\begin{pmatrix}0 \\ i \end{pmatrix}v$$ and $$\begin{pmatrix}i \\ 0 \end{pmatrix}v$$? I'm not sure I like this.

I think I found another way to come at this, which I outline sketchily.

I worked out the algebra for SU(2) and found that
$$[X,Y] =-2Z$$
$$[Z,X] = -2Y$$
$$[Z,Y] = 2X$$

This suggested that my original basis vectors were too big by a factor of 2. So I scaled them accordingly, which, by exponentiation (the standard way to recover the group from its algebra), gives, for some parameter $$t$$ (with t = 0 giving the identity $$e$$), group elements like this one:

$$\exp(tX) = \begin{pmatrix}\cos \frac{t}{2} & i \sin \frac{t}{2} \\ i \sin \frac{t}{2} & \cos \frac{t}{2}\end{pmatrix}.$$

This suggests that the symmetry of SU(2) is such that you only return to the identity $$e$$ under $$t \to t + 4 \pi n$$ for integer n, in contrast with the SO(3) symmetry, where $$s \to s + 2 \pi n$$ already brings you back. Well, there's a bit more to it than that, but isn't that what "spin 1/2" means?
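To make that explicit, just plug $$t = 2\pi$$ and $$t = 4\pi$$ into the matrix above:

$$\exp(2\pi X) = \begin{pmatrix}\cos\pi & i\sin\pi \\ i\sin\pi & \cos\pi\end{pmatrix} = -\mathbb{I}_{2}, \qquad \exp(4\pi X) = \begin{pmatrix}\cos 2\pi & i\sin 2\pi \\ i\sin 2\pi & \cos 2\pi\end{pmatrix} = +\mathbb{I}_{2},$$

so a 2π "rotation" only gets you to minus the identity, and you need the full 4π to come back to where you started.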
 
Lemme stop you right here. If I am allowed to think of the $$x^{\mu}$$ as local coordinates on some manifold, then I will interpret $$\partial_{\mu}\phi$$ as a co-tangent vector. This is standard notation - is that what you mean here? Or, without getting fancy, is this beast a co-vector? Likewise, is $$\partial^{\mu}\phi$$ a vector?
I think you're going a little over the top here. In the rigorous definition/construction you are right: you start with a scalar $$\phi$$ and construct $$e_{\mu}[\phi] = \frac{\partial}{\partial x^{\mu}}\phi$$ on the manifold, and then use $$\eta^{\mu\nu} : V \to V^{\ast}$$ to map this (or $$e_{\nu}$$) into the dual space, which then allows you to make the combination land back in the space of scalar fields. However, bear in mind that short of doing maths the way Dirac or Witten do/did, i.e. the rigorous construction of the mathematical notion of field theory, this is overkill.

$$\partial^{\mu}\phi^{\ast}\partial_{\mu}\phi = \partial_{t}\phi^{\ast}\partial_{t}\phi - \nabla \phi^{\ast}\cdot\nabla\phi$$. You can think of this as the energy cost for the field to vary through time and space. $$m^{2}\phi\phi^{\ast}$$ is simply the energy cost of having an oscillation in the field at all: the cost of being a perturbation away from zero, otherwise known as rest mass/energy.
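Putting the two pieces together, the Lagrangian being described here is the standard one for a free complex scalar field:

$$\mathcal{L} = \partial^{\mu}\phi^{\ast}\partial_{\mu}\phi - m^{2}\phi^{\ast}\phi.$$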
Anyway, to continue. This is a nice choice of transformation: since $$e^{-i \alpha}e^{i \alpha} = e^{0} = 1$$, it implies that $$(e^{-i \alpha}\phi)(e^{i \alpha}\phi^{*}) = \phi \phi^{*}$$. This transformation is an isometry!
This is not a coincidence. It's the only way that interactions of the form $$f(\phi^{\ast}\phi)$$ can be made invariant.
Aargh, phucking physicists!! You explicitly stated that $$\phi$$ is a scalar field and that $$\partial^{\mu}\phi$$ is a vector field. When will you guys learn that a) a tensor is not the same as a tensor field and that b) tensors can be satisfactorily defined as objects without reference to their transformation properties.

Grrr. Didn't I go to some length to explain this in another thread? (PS, I love you....)
Physicists are generally aware of this. For instance, the notion of a Dirac monopole is due to the inability to create a globally defined vector bundle. At any given point you have a vector but not a global vector field.

Part of my work involves the transformation of a space with a globally defined set of vector fields and a metric to a space with neither.
I would also like to know what the difference is between "charge" and "flavour". How are these reflected in the Lagrangian, for example?
Charge is the result of the Lagrangian being invariant under a particular gauge transformation. EM charge is due to a U(1) symmetry, as Ben mentions. Colour is 'strong charge' and is also conserved.

Flavour is an approximate SU(3) symmetry. This is NOT the same SU(3) as colour, it's just a confusing coincidence in the strong sector. Flavour is labelled by indices in the Lagrangian (often the QCD Lagrangian involves a sum $$\sum_{f=1}^{6}$$, respecting the 6-fold nature of flavour).
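Schematically (a sketch only, suppressing colour indices and the gauge-field details), the flavour label enters like this:

$$\mathcal{L}_{\text{quarks}} = \sum_{f=1}^{6} \bar{q}_{f}\left(i\gamma^{\mu}D_{\mu} - m_{f}\right)q_{f},$$

so flavour is just an index f that the kinetic and mass terms are summed over, and the symmetry is only approximate because the masses $$m_{f}$$ are all different.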
Find a basis for its algebra! I confess I am still slightly confused by Ben's terminology, but it looks like this is what he is getting at - the algebra of U(1) has a single basis vector, the algebra of SU(2) has three, and the algebra of SU(3) has eight. These will define the space of all actions of these groups (I think - correct me, someone, if I misunderstood the question).
Yep. The dimension of the Lie algebra tells you the number of gauge bosons, because it counts the number of independent 'gauges' in the system. There's 1 photon, 3 weak bosons and 8 gluons.
Is the vector space really real? The matrices above involve imaginary units...
Any manifold with $$\dim_{\mathbb{C}}M = n$$ is equivalent to one with $$\dim_{\mathbb{R}}M = 2n$$. Just take $$i \to \begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix}$$ or some other real 2x2 matrix which satisfies $$m^{2}=-\mathbb{I}_{2}$$. You're just picking a basis over $$\mathbb{R}^{2}$$ rather than $$\mathbb{C}$$.
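As a concrete instance of that replacement, the complex number $$z = a + ib$$ becomes the real 2x2 matrix

$$a\mathbb{I}_{2} + b\begin{pmatrix}0 & -1 \\ 1 & 0\end{pmatrix} = \begin{pmatrix}a & -b \\ b & a\end{pmatrix},$$

and matrix multiplication of these reproduces complex multiplication exactly, so anything written over $$\mathbb{C}$$ can be rewritten over $$\mathbb{R}$$ at the price of doubling the real dimension.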
Not sure I get this. Do you mean, for example, that for some vector $$v \in R^3$$, say, the image under the action of SU(2) will have "components" (unfortunate choice of words in this context!) which will be, say, $$\begin{pmatrix}0 \\ i \end{pmatrix}v$$ and $$\begin{pmatrix}i \\ 0 \end{pmatrix}v$$? I'm not sure I like this.
Given the fundamental rep of su(2) you can construct :

$$J_{\pm} = \sigma^{1} \pm i \sigma^{2}$$
$$J_{3} = \sigma^{3}$$

Given the defining algebra of su(2) you find that $$[J_{3},J_{\pm}] = \pm 2J_{\pm}$$ and $$[J_{+},J_{-}] = 4J_{3}$$ (or, rescaling everything by a factor of 1/2, the more familiar $$[J_{3},J_{\pm}] = \pm J_{\pm}$$ and $$[J_{+},J_{-}] = 2J_{3}$$, which is the form used below). From this you can construct a system of orthogonal 'integer vectors' of the form $$\left| \begin{array}{c} n \\ 1-n \end{array} \right\rangle$$ for n = 0, 1.

This is because you either take the specific 2x2 matrix representation of su(2) and see directly that that's how the matrices act, or you note that you can construct

$$|n\rangle$$ such that $$J_{3}|n\rangle = n|n\rangle$$, so $$J_{3}J_{\pm}|n\rangle = (n\pm 1)J_{\pm}|n\rangle$$, so you can say $$J_{\pm}|n\rangle \propto |n\pm 1\rangle$$, with $$\langle n |m \rangle \propto \delta_{nm}$$. The allowable range of n depends on the dimension of the representation of the vector space. If you're using the fundamental 2x2 rep of su(2) then n = 0, 1. If you're using the well-known 3x3 rep of su(2) then n = 0, 1, 2. This is because you can also show that there's a maximal $$|m\rangle$$ such that $$J_{+}|m\rangle=0$$ and a minimal $$|m'\rangle$$ such that $$J_{-}|m'\rangle=0$$ (consider the eigenvalue of $$J_{3}$$).

It's from this that we form the spin algebras, because anything with the algebra $$[T_{i},T_{j}] = \epsilon_{ijk}T_{k}$$ is a rep of su(2).
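In the 2x2 rep this is all very concrete: with the usual Pauli matrices, the operators above are

$$J_{+} = \begin{pmatrix}0 & 2 \\ 0 & 0\end{pmatrix}, \qquad J_{-} = \begin{pmatrix}0 & 0 \\ 2 & 0\end{pmatrix}, \qquad J_{3} = \begin{pmatrix}1 & 0 \\ 0 & -1\end{pmatrix},$$

so $$J_{+}$$ sends $$\begin{pmatrix}0 \\ 1\end{pmatrix}$$ to $$2\begin{pmatrix}1 \\ 0\end{pmatrix}$$ and annihilates $$\begin{pmatrix}1 \\ 0\end{pmatrix}$$, while $$J_{-}$$ does the reverse. The raising operator kills the top state and the lowering operator kills the bottom one, which is exactly the max/min statement above (the factors of 2 are just the un-halved Pauli normalisation).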

I think I found another way to come at this, which I outline sketchily.

I worked out the algebra for SU(2) and found that
$$[X,Y] =-2Z$$
$$[Z,X] = -2Y$$
$$[Z,Y] = 2X$$

this suggested that my original bases were too big by a factor of 2
This is the literal algebra of su(2), but it's commonplace for physicists to use $$\sigma_{i} = -\frac{T_{i}}{2}$$ to get rid of the 2 on the right-hand side. They also sometimes 'complexify' the algebra via $$\sigma_{i} = -\frac{iT_{i}}{2}$$; that way you get an i on the right-hand side too. It's convenient for some systems.
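Explicitly, if you write $$X' = X/2$$, $$Y' = Y/2$$, $$Z' = Z/2$$, the relations above become

$$[X',Y'] = -Z', \qquad [Z',X'] = -Y', \qquad [Z',Y'] = X',$$

with the factor of 2 gone; throwing in the extra i on top of that just swaps you between the anti-Hermitian and Hermitian conventions for the same algebra.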
Well, there's a bit more to it than that, but isn't that what "spin 1/2" means?
Yep. You can see that the t/2 in the specific matrix representation means that you have to rotate by $$4\pi$$ to get back to the identity. This is the spinor (double-cover) counterpart of SO(n).
 
Alpha: Thanks for that, though much of it was couched in language I don't yet understand. Leave it with me a while.

temur: The simple argument that elements of the algebra u(n) must be real was given me as follows.

We require vectors in u(n) to be anti-hermitian, that is, if $$X \in u(n)$$ then $$X^{\dagger} = -X$$, where "dagger" is the conjugate transpose. Write this equivalently as $$X = -X^{\dagger}$$.

Then $$iX = -iX^{\dagger} = +(iX)^{\dagger}$$, so if $$X \in u(n),\; iX \notin u(n)$$, because we've killed anti-hermitianicity (is this a word?) so elements of u(n) are always real.
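A concrete su(2) example of that argument: $$X = \begin{pmatrix}i & 0 \\ 0 & -i\end{pmatrix}$$ is anti-Hermitian (and traceless), so it lives in su(2); but $$iX = \begin{pmatrix}-1 & 0 \\ 0 & 1\end{pmatrix}$$ is Hermitian rather than anti-Hermitian, so it does not. You can only take real linear combinations of the basis and stay inside the algebra, which is the sense in which u(n) and su(n) are real vector spaces.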
 
Alpha: Thanks for that, though much of it was couched in language I don't yet understand. Leave it with me a while.
As you've noticed with Ben, the terminology used by physicists is often a touch sloppy. I was a maths student before my PhD so I tried my best, but even the course on Lie algebras I took was essentially "Lie algebras for theoretical physicists", despite being taught in a maths department. Not to mention most of it didn't sink in till my PhD.

Just say which bits make you think "WTF is he on about?!" and I'll try to explain.
We require vectors in u(n) to be anti-hermitian
Just to explain to Temur where that comes from :

The Lie group U(n) is defined as the set of n x n complex matrices m satisfying $$mm^{\dag} = \mathbb{I}_{n}$$.

To find the Lie algebra u(n) we find the tangent space of U(n) in the vicinity of the identity, since that's what u(n) is defined as. This amounts to saying $$mm^{\dagger} = 1$$ holds for any m in U(n). Let m be parameterised by some variable, say t, which is like picking a path through the Lie group. The tangent space is then obtained by taking the t derivative and setting t = 0 with m(0) = 1, so we're expanding about the identity.

$$\frac{d}{dt}(m(t)m(t)^{\dag}) = \frac{d}{dt}1$$
$$m'(t)m(t)^{\dag} + m(t)m'(t)^{\dag} = 0$$
$$M.m(0)^{\dag} + m(0)M^{\dag} = 0$$

where M is in u(n). Since m(0) = 1, its Hermitian conjugate is also equal to 1, so we have $$M+M^{\dag} = 0$$.

Thus the generators of u(n) are antihermitian.
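An equivalent shortcut, without introducing the path: write an element near the identity as $$m = \mathbb{I} + tM + O(t^{2})$$. Then

$$mm^{\dag} = \mathbb{I} + t\left(M + M^{\dag}\right) + O(t^{2}),$$

and demanding $$mm^{\dag} = \mathbb{I}$$ at first order in t gives $$M + M^{\dag} = 0$$, the same condition.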
temur: The simple argument that elements of the algebra u(n) must be real was given me as follows.
Your proof is wrong and I think you misunderstood what temur was referring to.

If $$A^{\dag} = -A$$ then it means A is entirely imaginary and antisymmetric. Complex matrices can be written as $$A = B+iC$$ where B and C have only real entries. If $$A^{\dag} = -A$$ then it means separately that $$B = B^{T}$$ and $$C = -C^{T}$$, because $$(B+iC)^{\dag} = B^{T}-iC^{T}$$.

Therefore, just as $$z^{\ast} = -z$$ means z = iy, so $$A = -A^{\dag}$$ means A = iC. It's just the matrix version.

To say something is 'real' in this context is equivalent to saying "$$\mathbb{R}^{3}$$ is a 3d vector space over the reals while $$\mathbb{C}^{3}$$ is a 3d vector space over the complex numbers". I.e. if $$v \in V$$ then $$v = v^{i}e_{i}$$ is such that $$v^{i} \in \mathbb{R}$$: it's over the reals. Whereas $$w \in \mathbb{C}^{3}$$ is $$w = w^{i}e_{i}$$ with $$w^{i} \in \mathbb{C}$$.

Hence you generate an element in su(2) via $$a \in su(2) \Rightarrow a = a^{i}\sigma_{i}$$ with $$a^{i} \in \mathbb{R}$$, not $$a^{i} \in \mathbb{C}$$.
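Spelled out with the anti-Hermitian basis built from the Pauli matrices, a general element is

$$a = a^{1}\begin{pmatrix}0 & i \\ i & 0\end{pmatrix} + a^{2}\begin{pmatrix}0 & 1 \\ -1 & 0\end{pmatrix} + a^{3}\begin{pmatrix}i & 0 \\ 0 & -i\end{pmatrix}, \qquad a^{i} \in \mathbb{R},$$

which is automatically traceless and anti-Hermitian for real coefficients; if you allowed complex $$a^{i}$$ you would get every traceless 2x2 complex matrix, i.e. sl(2,C) rather than su(2).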
 
Just say which bits make you think "WTF is he on about?!"
Coming!!

Your proof is wrong and I think you misunderstood what temur was referring to.

If $$A^{\dag} = -A$$ then it means A is entirely imaginary and antisymmetric.
I can't convince myself this is correct. First I don't think that "anti-symmetric" and "anti-hermitian" are quite the same thing; a(n) (anti-)symmetric operator acts on a real space, a(n) (anti-)hermitian operator acts on a complex space. Both are characterized in the same way, but in a real space the conjugate in the definition of the hermitian is deemed redundant (obviously!). Therefore an anti-symmetric operator in a real space is anti-hermitian, but not v.v.

Second, the matrices $$\begin{pmatrix}0 & -1\\ 1 & 0 \end{pmatrix}$$ and $$\begin{pmatrix}0 & 1 \\ -1 & 0 \end{pmatrix}$$ are most decidedly anti-hermitian (anti-symmetric), they are also most decidedly real. They are also traceless, and therefore eligible for consideration as a basis vector in su(2).

I challenge you to find an "entirely imaginary", traceless, antihermitian basis vector in su(2), other than $$\begin{pmatrix}0 & i\\ i & 0 \end{pmatrix}$$ and $$\begin{pmatrix}i & 0\\ 0 & -i\end{pmatrix}$$.

So here goes - WTF are you saying here?
To say something is 'real' in this context is equivalent to saying "$$\mathbb{R}^{3}$$ is a 3d vector space over the reals while $$\mathbb{C}^{3}$$ is a 3d vector space over the complex numbers".
I didn't get that argument - surely you're not denying that the reals are a subset of the complexes?
 
I can't convince myself this is correct. First I don't think that "anti-symmetric" and "anti-hermitian" are quite the same thing; a(n) (anti-)symmetric operator acts on a real space, a(n) (anti-)hermitian operator acts on a complex space. Both are characterized in the same way, but in a real space the conjugate in the definition of the hermitian is deemed redundant (obviously!). Therefore an anti-symmetric operator in a real space is anti-hermitian, but not v.v.
Sorry, you're right. Got my logic the wrong way around. I was thinking "Hermitian implies symmetric on the real part, antisymmetric on the imaginary" and then swapped the => to <= :eek:
I didn't get that argument - surely you're not denying that the reals are a subset of the complexes?
No, but scalar multiplication in a vector space is defined via $$F \times V \to V$$ for some field F. If F = R then it's a real vector space, irrespective of how you represent V (as I said, you can always write a complex array in terms of a larger real one). Yes, R is a subfield of C, but that doesn't mean I'm working over the field C, any more than F = Q would mean I'm working over the reals.
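The simplest example of the distinction: the set $$\mathbb{C}$$ itself is a 1-dimensional vector space over $$\mathbb{C}$$ (basis $$\{1\}$$) but a 2-dimensional vector space over $$\mathbb{R}$$ (basis $$\{1, i\}$$). Same set, different field of scalars, different dimension. "Real" here is a statement about which scalars you're allowed to multiply by, not about the entries of the matrices.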
 
Anti-Hermiticity means that the matrices in su(n) are like purely imaginary numbers, but purely imaginary numbers are actually like reals: there is only "one direction". Is this the same as saying there is no complex structure on the manifold? I have the feeling that this is what you mean by su(n) being real.
 
No, I don't think this is right; anti-Hermiticity doesn't mean that the matrices are "purely imaginary".

Distinguish between "purely imaginary" and "complex". Since $$\mathbb{R} \subset \mathbb{C}$$, any $$x \in \mathbb{R}$$ is also in $$\mathbb{C}$$, namely as $$x = x + i0$$. So x is complex in this sense, but it is most certainly not "purely imaginary".

This may seem like a piece of superfluous pedantry, until one realizes that the reals are not algebraically closed. That is, there are legitimate operations on the reals (taking the square root of a negative number, for example) whose solutions are not real.
 
OK, folks, looks like we're starting to motor here. Thank you all so much for your trouble!

So here are today's dumb questions, just like it said on the tin.

Alpha remarked that the fact that there is one basis vector for u(1), three for su(2) and eight for su(3) implies the existence of 1 photon, 3 weak bosons and 8 gluons. In the case of su(2), for example, may I say that, since the Taylors for each basis vector imply spin-1/2 symmetry, the B, W and Z have spin 1/2 in different "directions", say x, y and z (or whatever permutation takes your fancy)?

Edit: auxiliary question. A friendly physicist once told me (on these pages, as it happens) that there is a theorem of ?Goldstone? that says something like: for every degree of freedom in my theory there is a boson. Is this at all relevant to the above?

Second, though I have no intention (unless bullied into it) of working out the algebra of SU(3), I would be willing to guess that the Taylors would reveal a symmetry that is not like that of U(1) or SU(2). Is this correct? What is it like?

Third, and this maybe related, Ben mentioned spin up/spin down. Where does this enter the picture?
 
Urgh. I promised I wouldn't work out the algebra for SU(3), but as I have had no takers so far, I messed around a bit.

First, noting that, due to the det = +1 constraint on U(n) which makes SU(n) "special", the basis for any algebra su(n) will have cardinality $$n^2-1$$. So I would look for 8 traceless, anti-hermitian matrices for my basis. Right? I could only find 7!!
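(Quick check on that count: an n x n complex matrix has $$2n^{2}$$ real parameters; anti-Hermiticity $$M^{\dag} = -M$$ cuts that down to $$n^{2}$$; and since the trace of an anti-Hermitian matrix is purely imaginary, tracelessness is one further real condition, leaving $$n^{2}-1$$. For n = 3 that is indeed 8.)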

Undaunted, I pressed on. Using the vectors I had, I worked out a handful of commutators (bearing in mind that there are $$8^2 - 8$$ of them - though due to skew symmetry I only (!!!) need to figure out half of them - this is not exactly laziness on my part).

The following pattern seemed to be emerging. I had four classes of Lie bracket in my algebra. Letting the $$X_i$$ be basis vectors in su(3), it looks like these classes will be

$$[.,.] = iX_h$$
$$[.,.] = -iX_j$$
$$[.,.] = 2iX_k$$
$$[.,.] = -2iX_l$$

Now WTF does this imply to a physicist? Obviously I haven't exponentiated, but, since I anticipate that exponentiation will look after the imaginary unit quite prettily, it looks to me as though this will turn out to mean that, of the 8 gluons I have been asking about, 2 will have U(1) symmetry, 2 will have U(1) anti-symmetry, 2 will have U(2) symmetry and 2 will have U(2) anti-symmetry.

Am I mad? Or merely mistaken?
 
Urgh. I promised I wouldn't work out the algebra for SU(3), but as I have had no takers so far, I messed around a bit.

First, noting that, due to the det = +1 constraint on U(n) which makes SU(n) "special", the basis for any algebra su(n) will have cardinality $$n^2-1$$. So I would look for 8 traceless, anti-hermitian matrices for my basis. Right? I could only find 7!!

Try doing obscene things with the Pauli matrices. The eighth one is a bit tricksy.

Now WTF does this imply to a physicist? Obviously I haven't exponentiated, but, since I anticipate that exponentiation will look after the imaginary unit quite prettily, it looks to me as though this will turn out to mean that, of the 8 gluons I have been asking about, 2 will have U(1) symmetry, 2 will have U(1) anti-symmetry, 2 will have U(2) symmetry and 2 will have U(2) anti-symmetry.

Am I mad? Or merely mistaken?

Well...I wonder if what you said is related to the fact that there is a Cartan sub-algebra of dimension 2 for SU(3)?
 
Alpha remarked that the fact that there is/are one basis vector for su(1), three for su(2), eight for su(3) implies the existence of 1 photon, 3 weak bosons and 8 gluons. In the case of su(2), for example, may I say that, since the Taylors for each basis vector implies spin-1/2 symmetry, then B, W and Z have spin-1/2 in different "directions", say x, y and z (or whatever permutation takes your fancy)?

Hmm....I am confused. What is a ``Taylor''?
 
Hmm....I am confused. What is a ``Taylor''?
Yes, sorry, that was very sloppy on my part. I had meant the Taylor power series expansion, but I don't think I should have said that; I should have said the exponential map, though they are pretty much the same thing. I can take you through it when I get a little more time if you like (but I feel sure you shan't).

Try doing obscene things with the Pauli matrices. The eighth one is a bit tricksy.
OK. I did do obscene things with these guys, some of them unnatural and some of them positively bestial. To no avail. But the eighth one always eluded my bestial advances!

I wonder if what you said is related to the fact that there is a Cartan sub-algebra of dimension 2 for SU(3)?
What's a Cartan sub-algebra?
 
You choose some basis and associate gluons to the basis elements, what makes this basis so special? Isn't everything supposed to not depend on basis?
 
What's a Cartan sub-algebra?

Well, there are eight generators of su(3) (or basis vectors, as you like). The Cartan sub-algebra is the set of those generators which are mutually commuting---i.e. their Lie brackets are zero. It turns out that, for reasons which I couldn't begin to tell you about but probably knew at some point, the CSA of su(N) always has dimension N-1. This should help you solve your su(3) basis problem, as you should find two matrices which are mutually commuting, usually taken to be the diagonal ones.

OK. I did do obscene things with these guys, some of them unnatural and some of them positively bestial. To no avail. But the eighth one always eluded my bestial advances!

Also, if you get too stuck, you can Wikipedia ``Gell-Mann matrices'' and find the answer.

You choose some basis and associate gluons to the basis elements, what makes this basis so special? Isn't everything supposed to not depend on basis?

Well, the gluons themselves can be thought of AS the basis elements. When we pick a basis to do calculations in, the results should be independent of how we choose to represent them. There is a common convention (Gell-Mann matrices or Gell-Mann basis), but you're not bound by that unless you want other physicists to cite your work.
 
OK, I looked up the Gell-Mann matrices. No wonder I couldn't find the elusive eighth basis vector. I have no idea how he came up with that, but it does have the required properties of tracelessness and anti-Hermiticity. The others were as I had found them to be. I had also already noticed that the Pauli matrices were somehow "contained" in the SU(3) basis.

Hurrah for me!

Moreover, although I didn't do an exhaustive computation I did find that what the article calls $$\lambda_3$$ (I called it E), commutes with the $$\lambda_8$$. Maybe others do too, I dunno.

Anyway, I now see almost all of my last post was complete crap - though I do stick by my commutation relations. I don't need to fret about the odd factor of plus or minus 2 in these relations, in which case the symmetry of SU(3) will be 2 pi. This of course implies spin-1, just as the doctor ordered!

While I am here, let me make a couple of related points. They talk about "8 gluons", these being the basis for the algebra su(3). But Lie theory informs us that all Lie brackets are vectors, and these can be added and scaled. This seems to imply a possible countable infinity of gluons. Or do I mean gluon-gluon interactions?

Or am I talking gibberish again? Please help.

They also talk about "colour charge" whatever that might mean. Nevertheless, the perfect anti-symmetry relations I found in the commutators suggests that, for each "colour", there can be an "anti-colour". Moreover, the vanishing commutator I referred to above implies the existence of a "non-colour". This all fits in with what I have just been reading.

How very nice - I rather like it! (Of course, I may be talking out of my ear, or worse!)
 
I had also already noticed that the Pauli matrices were somehow "contained" in the SU(3) basis.

I think that maybe this is a consequence of some work by Cartan and Dynkin. But I don't know.

Moreover, although I didn't do an exhaustive computation I did find that what the article calls $$\lambda_3$$ (I called it E), commutes with the $$\lambda_8$$. Maybe others do too, I dunno.

No those are unique---those are the two elements of the Cartan Sub-Algebra.
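Concretely, those two are the diagonal Gell-Mann matrices,

$$\lambda_{3} = \begin{pmatrix}1 & 0 & 0 \\ 0 & -1 & 0 \\ 0 & 0 & 0\end{pmatrix}, \qquad \lambda_{8} = \frac{1}{\sqrt{3}}\begin{pmatrix}1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & -2\end{pmatrix},$$

and they commute simply because both are diagonal.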

Anyway, I now see almost all of my last post was complete crap - though I do stick by my commutation relations. I don't need to fret about the odd factor of plus or minus 2 in these relations, in which case the symmetry of SU(3) will be 2 pi. This of course implies spin-1, just as the doctor ordered!

I'm a bit confused. What do you mean by ``the symmetry of SU(3) will be 2 pi''?

While I am here, let me make a couple of related points. They talk about "8 gluons", these being the basis for the algebra su(3). But Lie theory informs us that all Lie brackets are vectors, and these can be added and scaled. This seems to imply a possible countable infinity of gluons. Or do I mean gluon-gluon interactions?

Sure, but the BASIS is always the same, right? You can always write vectors in terms of the Cartesian unit vectors $$\hat{i}, \hat{j}, \hat{k}$$---my understanding of the generators (or ``basis'') of su(3) is that they are just unit vectors in some lattice, which tell you how to jump from one point to the next.

They also talk about "colour charge" whatever that might mean. Nevertheless, the perfect anti-symmetry relations I found in the commutators suggests that, for each "colour", there can be an "anti-colour". Moreover, the vanishing commutator I referred to above implies the existence of a "non-colour". This all fits in with what I have just been reading.

How very nice - I rather like it! (Of course, I may be talking out of my ear, or worse!)

Now the fun begins. The eight 3x3 matrices span the algebra; the algebra acting on itself via the Lie bracket gives the 8-dimensional ``adjoint'' representation (which is where the gluons live), while the same 3x3 matrices acting on 3-component vectors give the ``fundamental'' representation. I think it's called the ``fundamental'' rep because you can make tensor products and build any other rep out of it---if you know anything about Young's Tableaux (Alfred Young was a countryman of yours) then this is a cute little exercise to build a few other representations.
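As a taste of how that works, and to connect back to the eight gluons: combining a fundamental with an anti-fundamental decomposes as

$$\mathbf{3} \otimes \bar{\mathbf{3}} = \mathbf{8} \oplus \mathbf{1},$$

i.e. a colour/anti-colour pair splits into the 8-dimensional adjoint (where the gluons sit) plus a colour singlet, which is why there are eight gluons rather than nine.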
 
Ben: I thank for your responses, truly I do. However, I think we are drifting slightly off the point of this thread.

Yes, I understand the Lie groups and their algebras moderately well. My whole point, I guess, can be summarized thusly:

What is the justification that physicists give for asserting that, given the Lie groups and their algebras that we have been discussing, then this of necessity implies the existence of photons, weakly-interacting bosons and gluons?

Alpha associated each of these with the bases for u(1), su(2) and su(3), respectively, and certainly the numbers stack up.

But what was the justification for this?

Or is it the other way round? id est assume the existence of these particles, and then find a piece of mathematics to describe them?

In short, I am confused (what's new?)
 