The QH QM QA thread.

Oh, clever, clever. Let's make a start
I've been reading about this over the last few days and chatted a bit about it with a friend who likes to read similar textbooks on diff geom and is much better at them than me (Ben, I'm referring to 'Euler' over on PhysOrg).

Given a principal bundle $$P \stackrel{\pi}{\to} M$$ with fibre G (with Lie algebra $$\mathcal{L}(G)$$), we can define a connection $$A$$, which is the usual union of the A_i over patches etc etc. TP we can split into two parts: the 'vertical' VP, such that $$\pi_{\ast}(X^{V}) = 0$$ for $$X^{V} \in VP$$, and the 'horizontal' HP, where $$X^{H} \in HP$$ can be defined either by $$X^{H} = X-X^{V}$$ or directly as the kernel of the connection form. Suffice to say, there's a way of conceptualising $$TP = HP \oplus VP$$, a decomposition of the tangent space of the bundle. But how can we manipulate vectors specifically?

The connection projects any vector in TP down into VP; locally, $$A_{i} \equiv \sigma_{i}^{\ast}\omega \in \mathcal{L}(G) \otimes \Omega^{1}(U_{i})$$. You can then talk about transformations of the A_i, as we've done in the past.

The spiffy thing is that you can then define the curvature of the bundle, $$\Omega = D\omega \in \Omega^{2}(P) \otimes g$$, which comes down to $$\Omega = d\omega+\omega \wedge \omega$$, since $$D = d + [\omega, \bullet ]$$.

We can then map back to A's and F's (rather than omegas and Omegas) via the pullback $$\sigma^{\ast}$$, $$F = \sigma^{\ast}(\Omega)$$.

The Bianchi identity, which is true for all non-pathological principal bundles (this is a subtle way of saying "I define pathological to be any example where this statement fails", a not uncommon 'definition' in mathematics ;)) is that $$D\Omega =0$$, so doing our pulled-back goodness we get that

$$\mathcal{D}F = dF + [A,F] = 0$$

Putting in coordinates/charts/etc., we get (using $$X_{[a}Y_{b]} = \frac{1}{2!}( X_{a}Y_{b} - X_{b}Y_{a})$$, etc.),

$$D_{[\mu}F_{\nu\lambda]} = \partial_{[\mu}F_{\nu\lambda]} + [A_{[\mu},F_{\nu\lambda]}] = 0$$

But these are g values and so putting in $$T^{a}$$ generators,

$$D_{[\mu}F_{\nu\lambda]}^{a} = \partial_{[\mu}F_{\nu\lambda]}^{a} + A_{[\mu}^{b}F_{\nu\lambda]}^{c}f^{a}_{bc} = 0$$

Take f = 0 for G = U(1), since an abelian group has vanishing structure constants, and so $$ \partial_{[\mu}F_{\nu\lambda]} = 0$$, the homogeneous Maxwell equations.
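As a sanity check, here's a quick sympy sketch (my own toy illustration, not from any text): build F from a completely arbitrary gauge potential A on 4 coordinates and verify the antisymmetrised derivative vanishes identically, whatever the A_mu are.

```python
# Symbolic check that for an abelian field strength
# F_{mu nu} = d_mu A_nu - d_nu A_mu, the antisymmetrised derivative
# d_[mu F_{nu lambda}] vanishes identically for ANY gauge potential A.
import sympy as sp
from itertools import permutations

t, x, y, z = sp.symbols('t x y z')
coords = [t, x, y, z]
# Arbitrary smooth gauge potential components A_mu(t, x, y, z)
A = [sp.Function(f'A{mu}')(*coords) for mu in range(4)]
F = lambda mu, nu: sp.diff(A[nu], coords[mu]) - sp.diff(A[mu], coords[nu])

def dF_antisym(mu, nu, lam):
    # (1/3!) * signed sum over permutations of d_mu F_{nu lambda}
    total = sp.S.Zero
    signs = (1, -1, -1, 1, 1, -1)   # parities in itertools ordering
    for perm, sign in zip(permutations((mu, nu, lam)), signs):
        a, b, c = perm
        total += sign * sp.diff(F(b, c), coords[a])
    return sp.simplify(total / 6)

print(dF_antisym(0, 1, 2))  # 0
```

Everything cancels by equality of mixed partials, which is exactly why the abelian Bianchi identity is an identity rather than an equation of motion.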

Pure and funky differential geometry. Once again, electromagnetism is the result of the simplest non-trivial principal bundle our space-time can possibly be the base space for!
This makes sense, if true, since we may assume that $$V$$ depends on the spatial $$x_i$$ and (usually?) not at all on the $$\frac{dx_i}{dt}$$.
I'm 99.9% sure that V cannot involve $$\partial_{t}$$ by definition, since such a factor is something an entity would have by virtue of its motion.

I think you can show that by doing the Euler-Lagrange equations, since they treat terms involving time derivatives separately. In fact, you can go, algorithmically, from L = T-V to H = T+V, and back and forth between L and H using the Euler-Lagrange equations and Hamilton's equations, which are derived in much the same way from H. The Hamiltonian is sometimes more convenient, since $$\hat{H}|\phi\rangle = E_{\phi}|\phi\rangle$$, and if you're doing computational things over N q's (so you have 2N variables when you include the $$\dot{q}$$'s), you can either work with the N second-order equations you get from L or with the 2N first-order ones you get from H. For something like a ball moving under gravity, which amounts to x'' = -g, a 2nd-order ODE, no problem. Doing it for 3 complex fields, so you end up with 12 variables and horrific couplings between them, 1st-order is the way to go.
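To make that concrete, here's a minimal sketch (my own toy example, not anyone's production code) of the ball under gravity done the first-order way: x'' = -g becomes the pair x' = v, v' = -g, integrated with a hand-rolled RK4 step and compared against the exact solution.

```python
# The 2nd-order equation x'' = -g rewritten as two 1st-order equations,
# x' = v and v' = -g, integrated with a basic RK4 step and checked
# against the exact solution x(t) = x0 + v0*t - g*t**2/2.
g = 9.81

def deriv(state):
    x, v = state
    return (v, -g)          # (x', v')

def rk4_step(state, h):
    def add(s, k, c):
        return (s[0] + c * k[0], s[1] + c * k[1])
    k1 = deriv(state)
    k2 = deriv(add(state, k1, h / 2))
    k3 = deriv(add(state, k2, h / 2))
    k4 = deriv(add(state, k3, h))
    return (state[0] + h / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]),
            state[1] + h / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]))

x0, v0, h, steps = 0.0, 10.0, 0.01, 200   # integrate to t = 2 s
state = (x0, v0)
for _ in range(steps):
    state = rk4_step(state, h)

t = h * steps
exact = x0 + v0 * t - 0.5 * g * t**2
print(abs(state[0] - exact) < 1e-9)  # True: RK4 is exact on polynomial motion
```

For 2N coupled variables the same loop works unchanged; only `deriv` grows, which is precisely why the first-order form is the computational winner.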

Until you overload Mathematica's ability to do precise numerical calculations and smooth curves to go random noise.... :mad:
 
TP we can split into two parts: the 'vertical' VP, such that $$\pi_{\ast}(X^{V}) = 0$$ for $$X^{V} \in VP$$, and the 'horizontal' HP, where $$X^{H} \in HP$$ can be defined either by $$X^{H} = X-X^{V}$$ or directly as the kernel of the connection form. Suffice to say, there's a way of conceptualising $$TP = HP \oplus VP$$, a decomposition of the tangent space of the bundle. But how can we manipulate vectors specifically?

The connection projects any vector in TP down into VP; locally, $$A_{i} \equiv \sigma_{i}^{\ast}\omega \in \mathcal{L}(G) \otimes \Omega^{1}(U_{i})$$. You can then talk about transformations of the A_i, as we've done in the past.
OK, I have a few questions/observations.

My Lie text gives two definitions of a connection on a PB, which it claims are equivalent. The first goes like this: a connection on a PB is a smooth assignment of a subspace $$H_xP \subset T_xP$$, the horizontal subspace, such that $$T_xP =H_xP \oplus V_xP$$ and $$H_{xg}P= R_gH_xP$$, where g is a Lie group element and $$R_g$$ denotes the (right) group action.

This is nice. It has the totally intuitive interpretation that, if vectors in the vertical subspace "point along" the fibres, then those in the horizontal subspace "point between" them, and we can roam over our PB by the group (right) action.

The second definition goes: a connection form on a PB is a smooth mapping $$\omega : T_xP \to \mathfrak{G}$$ (the latter is my notation for the algebra) with the same group-action compatibility requirement, and then $$H_xP$$ is the kernel of the mapping $$\omega(x): T_xP \to \mathfrak{G} \approx V_xP$$.



The spiffy thing is that you can then define the curvature of the bundle, $$\Omega = D\omega \in \Omega^{2}(P) \otimes g$$, which comes down to $$\Omega = d\omega+\omega \wedge \omega$$, since $$D = d + [\omega, \bullet ]$$.
Surely this is the curvature of the connection?

We can then map back to A's and F's (rather than omegas and Omegas) via the pullback $$\sigma^{\ast}$$, $$F = \sigma^{\ast}(\Omega)$$.
Right, though I think you want to pull back the connection, rather than the curvature? Oh, I see: $$\omega \to A_{\mu},\; \Omega \to F_{\mu \nu}$$? The Faraday tensor is curvature?

Anyway, the question arises: unless our PB is globally trivial, the pullback of the connection by the section can only give local $$\mathfrak{G}$$-valued representatives $$\omega'_\alpha = \sigma_\alpha^{\ast} \omega$$ on each $$U_\alpha \subset M$$ in the cover. It looks like we need a way of moving from $$U_\alpha$$ to $$U_\beta$$ using G-valued transition functions. I guess that's no real sweat though.


$$D_{[\mu}F_{\nu\lambda]} = \partial_{[\mu}F_{\nu\lambda]} + [A_{[\mu},F_{\nu\lambda]}] = 0$$

But these are g values and so putting in $$T^{a}$$ generators,
Hmm, is the above what you're addressing here? What are g values? Ah, you meant valued, right? Still not sure I get it.

Incidentally, although I have seen your notation for cyclic permutations used before, it makes me feel sick!
 
My Lie text gives two definitions of a connection on a PB, which it claims are equivalent. The first goes like this: a connection on a PB is a smooth assignment of a subspace $$H_xP \subset T_xP$$, the horizontal subspace, such that $$T_xP =H_xP \oplus V_xP$$ and $$H_{xg}P= R_gH_xP$$, where g is a Lie group element and $$R_g$$ denotes the (right) group action.
I've not seen that definition before but it's a very nice one.

Particularly if your base manifold is the fibre group!
Surely this is the curvature of the connection?
It's certainly the covariant derivative of the connection but I don't know if it's actually 'the curvature of the connection'. Curvature, in the relativity sense, involves derivatives of the metric, but we don't call it the curvature of the metric but more like the curvature of the manifold. I guess it's a matter of semantics somewhat.

I'm only just getting to grips with this myself. It has made a bunch of papers my supervisor asked me to read (half of them written by her!) on Calabi Yau manifolds a lot more obvious. They keep talking about bundles and saying "curvature" but not referring to a metric. Now I realise they mean this kind of curvature. One of the critical results, which led to the SO(32) and $$E_{8} \times E_{8}$$ string theories, is that the curvature of the bundle, ie $$F_{\mu \nu}$$, and the curvature of the base manifold, $$R$$, have to be balanced, and that's only true when the group has dimension 496. Not only did I not understand the curvature concept, I didn't understand how the group could alter it. Now I get it :)

So don't feel like you're the only one stumbling through the dark like a drunken Scotsman Quarkhead, we all are to some degree or other :p
Right, though I think you want to pull back the connection, rather than the curvature?
Can do it either way because $$\sigma^{\ast}(d_{P}\omega + \omega \wedge \omega) = d(\sigma^{\ast}\omega) + \sigma^{\ast}(\omega) \wedge \sigma^{\ast}(\omega)$$.

You can do it before or after, just remember what the 'd' means in each case since there's a slight difference. This is in Chapter 10 of Nakahara. I'll sort out sending it to you tomorrow.
The Faraday tensor is curvature?
Yep :)

Might be another case of semantics, but isn't it the Maxwell tensor?
It looks like we need a way of moving from $$U_\alpha$$ to $$U_\beta$$ using G-valued transition functions.
$$\omega_{i}$$ is defined on $$U_{i}$$; then you have that $$\omega_{j} = t_{ij}^{-1}\omega_{i} \, t_{ij} + t^{-1}_{ij}dt_{ij}$$. This is a form familiar to anyone who has done non-abelian QFT, but I realise that doesn't include you. But the physics approach can give a nice motivation for this.
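For G = U(1) you can see the whole content of that transition rule in a few lines of sympy (a sketch of my own, with $$t = e^{i\theta(x)}$$ on a 1-d overlap assumed): the conjugation piece is trivial because everything commutes, and all that survives is the inhomogeneous $$t^{-1}dt$$ shift.

```python
# For abelian G = U(1), omega_j = t^{-1} omega_i t + t^{-1} dt reduces to
# the familiar gauge shift A -> A + i d(theta). Checked symbolically with
# t = exp(i*theta(x)) and A(x) the (coefficient of dx of the) local form.
import sympy as sp

x = sp.symbols('x', real=True)
theta = sp.Function('theta', real=True)(x)
A = sp.Function('A')(x)     # local Lie-algebra-valued 1-form coefficient

t = sp.exp(sp.I * theta)
A_new = sp.simplify(t**-1 * A * t + t**-1 * sp.diff(t, x))

# Conjugation drops out; only the inhomogeneous term remains:
delta = sp.simplify(A_new - (A + sp.I * sp.diff(theta, x)))
print(delta)  # 0
```

That shift is exactly the electromagnetic gauge transformation, which is the U(1) shadow of the full non-abelian rule.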

Pure Yang Mills gauge theory has the Lagrangian (up to a multiplicative constant) $$\int F_{\mu\nu}F^{\mu\nu}$$

But if we're working over a non-trivial gauge group we change this to $$\int \textrm{Tr}\left( F^{a}_{\mu\nu} F^{b\,\mu\nu} T^{a} T^{b}\right)$$

Or more simply

$$\int \textrm{Tr}\left( F_{\mu\nu} F^{\mu\nu} \right)$$

We need to be sure that going from U_i to U_j doesn't screw this up. Well, a straightforward thing to have is that $$t_{ij} : U_{i} \cap U_{j} \to G$$ such that $$F \to t_{ij}F\, t^{-1}_{ij}$$. Traces are cyclic and so

$$\int \textrm{Tr}\left( F_{\mu\nu} F^{\mu\nu}\right) \to \int \textrm{Tr}\left( t_{ij}\, F_{\mu\nu} F^{ \mu\nu} t_{ij}^{-1}\right) = \int \textrm{Tr}\left( F_{\mu\nu} F^{\mu\nu}\right) $$
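A cheap numerical spot-check of the cyclicity argument, with random matrices standing in for the $$F_{\mu\nu}$$ and for an invertible transition function t (my own toy numbers, nothing physical about them):

```python
# Trace cyclicity in action: Tr(F F') is unchanged by F -> t F t^{-1},
# which is what makes the Yang-Mills density patch-independent.
import numpy as np

rng = np.random.default_rng(0)
n = 3
F1 = rng.standard_normal((n, n))          # stand-in for F_{mu nu}
F2 = rng.standard_normal((n, n))          # stand-in for F^{mu nu}
t = rng.standard_normal((n, n)) + n * np.eye(n)   # comfortably invertible
t_inv = np.linalg.inv(t)

before = np.trace(F1 @ F2)
after = np.trace(t @ F1 @ t_inv @ t @ F2 @ t_inv)
print(np.isclose(before, after))  # True
```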

Again, the pdf I linked to a page or two ago with the SU(N) gauge beta function stuff does this in explicit detail.
Hmm, is the above what you're addressing here? What are g values? Ah, you meant valued, right? Still not sure I get it.
Yes, I meant to say "g valued", and 'g' is the little gothic symbol ($$\mathfrak{g}$$) for the Lie algebra. You're using a curly G and I usually use $$\mathcal{L}(G)$$. That's not a convenient notation to say 'g valued' though...

And yes, notation on indices is nasty. Antisymmetrising over a bunch of indices, some of which belong to antisymmetric tensors themselves, some of which are contracted, etc. It's very unpleasant.

That's why p-form notation is so much better when you don't put in the indices, ie $$F = dA + A \wedge A$$, $$dF = [F,A]$$ etc.
 
OK. Here is this week's dumb question. And when I say dumb, I mean really, really dumb. It is a far cry from what we have been discussing, but, there you go.....

At some point in my reading last week I encountered the terms "momentum operator" and "position operator". This is baffling me!

Ordinarily, I would like to think of momentum and position as vectors (given the appropriate choice of coordinates).

Equally ordinarily, I would like to think of an "operator" as a linear transformation on a vector space.

Now, I have the equation for the momentum operator scribbled in front of me. I notice there is no mass term! So what gives? Obviously the "momentum operator" is not the same as "momentum". What is it doing, in simplistic terms? Likewise the position operator?
 
Aren't they just a way of getting Heisenberg into the math? I think if you specify position and momentum operators they need to be conjugate (not sure if they have to be dimensionless).
I think you might mean you have "a" momentum operator in front of you, since a particular operator would depend on the state space...?

P.S. this is probably a dumb answer.

P.P.S. I probably mean it's a way of making sure Heisenberg is in there, by deriving or defining operators appropriately. I found a clue about how "position- and momentum-like operators are defined through the Cartesian decomposition of the mode operators" in a Hilbert space.
Also I think I'm actually talking about phase space, oh bugger.
 
OK now I'm confused about phase spaces.
In a generalised phase space, two quantities determine "complete" information about the dynamics. Momentum and position I see described, in an article by Martin Gutzwiller about Rydberg states, as "the product of mass and velocity" and "the strength and direction of the force", respectively.

Isn't he describing momentum twice, instead of giving an intuitive description of position?
Edit: no, of course the position (vector) is a time derivative, and momentum is a product, natch.

The idea with a generalised phase space appears to be the application of various mathematical tricks to reduce the 3 + 3 dimensions to a more manageable number; to "slice" the problem or the space, so to speak.
 
OK. Here is this week's dumb question. And when I say dumb, I mean really, really dumb. It is a far cry from what we have been discussing, but, there you go.....

At some point in my reading last week I encountered the terms "momentum operator" and "position operator". This is baffling me!

Ordinarily, I would like to think of momentum and position as vectors (given the appropriate choice of coordinates).

Equally ordinarily, I would like to think of an "operator" as a linear transformation on a vector space.

Now, I have the equation for the momentum operator scribbled in front of me. I notice there is no mass term! So what gives? Obviously the "momentum operator" is not the same as "momentum". What is it doing, in simplistic terms? Likewise the position operator?


How much CLASSICAL mechanics do you know?
 
OK. Here is this week's dumb question. And when I say dumb, I mean really, really dumb. It is a far cry from what we have been discussing, but, there you go.....

At some point in my reading last week I encountered the terms "momentum operator" and "position operator". This is baffling me!

Ordinarily, I would like to think of momentum and position as vectors (given the appropriate choice of coordinates).

Equally ordinarily, I would like to think of an "operator" as a linear transformation on a vector space.

Now, I have the equation for the momentum operator scribbled in front of me. I notice there is no mass term! So what gives? Obviously the "momentum operator" is not the same as "momentum". What is it doing, in simplistic terms? Likewise the position operator?

$$\mathbf P = \frac {h}{i}\mathbf \nabla \cdot$$

The momentum operator is applied to the wave function $$\psi(\mathbf x,t)$$. The gradient operator carries units of 1/length. Planck's constant has units of energy multiplied by time = momentum multiplied by distance and i is unitless. Putting it all together, the momentum operator has units of momentum.

There is no uncertainty (other than measurement error) in classical physics. Quantum mechanics deals with things that have some inherent uncertainty in them. A simple rule for generating a quantum law of physics from a classical law is to replace the classical variables in the classical law with the corresponding operators (operating on the wave function).
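A sketch of that last rule in action (sympy, with a plane wave assumed as the test state, my choice of example): applying $$-i\hbar\nabla$$ to $$e^{ipx/\hbar}$$ just reads the number p off the phase, with the mass appearing nowhere, which is exactly the puzzle about the missing mass term.

```python
# Why there's no mass in the momentum operator: p_hat = -i*hbar*d/dx
# reads the momentum off the phase of the wave function. On a plane wave
# exp(i*p*x/hbar) it returns p times the same wave, for any particle mass.
import sympy as sp

x, p, hbar = sp.symbols('x p hbar', real=True, positive=True)
psi = sp.exp(sp.I * p * x / hbar)
p_hat_psi = -sp.I * hbar * sp.diff(psi, x)
print(sp.simplify(p_hat_psi / psi))  # p
```

So the operator isn't "the momentum"; it's the machine whose eigenvalues are the momenta a measurement can return.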
 
Ben: I know how to strip down a gear-box, does that count? Otherwise it would be fair to say I know more Mandarin than classical mechanics in physics.

Of course I know the Laws of Newton, and (just to please you) I spent some time reading up on Lagrangian mechanics, which, again, in terms of Newton, I am now completely at home with.

Ah... you are not hinting, I hope, that if I don't know classical mechanics, it is presumptuous of me to ask about quantum mechanics? This would be a very fair point.....

{the momentum operator is} $$\mathbf P = \frac {h}{i}\mathbf \nabla \cdot$$

D H I thank you, but the QM momentum operator I have scribbled is $$\vec{p} = -i \hbar \nabla = -i \frac{h}{2 \pi} \frac{\partial}{\partial x^i}$$. I cannot square this with what you wrote.

Planck's constant has units of energy multiplied by time = momentum multiplied by distance and i is unitless. Putting it all together, the momentum operator has units of momentum.
I don't doubt you are correct. I find this hard to follow, however. May I prevail upon you to break this down a little?
 
Ok, I know I have failed at this more than once, but here I go anyway!

Classically, you can always write a function called a Hamiltonian, which is the sum of the kinetic and potential energies of a particle. For example, suppose we have a particle at some height x in a gravitational potential. Easy. The kinetic energy is $$T = \frac{1}{2} mv^2$$ and the potential energy is $$V = mgx$$. Then we can write the Hamiltonian as

$$\mathcal{H} = T + V = \frac{1}{2}mv^2 + mgx$$.

Now, you realize that you have two variables here, v and x. But wait, they're related by a derivative, so you define

$$v \equiv \frac{dx}{dt} \equiv \dot{x}$$

and you have

$$\mathcal{H} = \frac{1}{2}m\dot{x}^2 + mgx$$.

But we're not quite done yet. For technical reasons, that AlphaNumeric can probably explain better than me, we want to express the Hamiltonian only in terms of momenta and coordinates. This means we need to make the substitution

$$\frac{1}{2} m \dot{x}^2 = \frac{p^2}{2m}$$

This gives us our official Hamiltonian:

$$\mathcal{H} = \frac{p^2}{2m} + mgx$$.

This function encodes all of the information that you ever want to know about the particle---this is why Newton thought that physics (and thus the universe) were completely deterministic. I give you T, I give you V, and you know everything there is to know.

``Knowing everything'' means that you know the equations of motion---that is, you can solve for the position and the momentum (which, remember, are functions of time) of a particle at any time t, given a set of initial conditions. The equations of motion tell you precisely what the particle will do. Now I should tell you how to find these equations of motion. In general, if you have n coordinates (n=1 for us), you will have 2n first-order differential equations.

In general, the equations of motion are given by

$$\frac{dp}{dt} = -\frac{\partial \mathcal{H}}{\partial x}$$,

$$\frac{dx}{dt} = \frac{\partial \mathcal{H}}{\partial p}$$,

which translate, for our example, to

$$\frac{dp}{dt} = -mg$$,

$$\frac{dx}{dt} = \frac{p}{m}$$.

If you were paying attention in Kindergarten, you will notice that the first equation is exactly Newton's second law (F = -mg here) and the second equation tells you how velocity and momentum are related. Great! Consistency.
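If you don't trust my partial derivatives, sympy will take them for you. A sketch (my own check, same H as above) deriving the two equations of motion straight from the Hamiltonian:

```python
# Hamilton's equations from H = p**2/(2m) + m*g*x:
# dp/dt = -dH/dx and dx/dt = dH/dp should give -mg and p/m.
import sympy as sp

x, p, m, g = sp.symbols('x p m g')
H = p**2 / (2 * m) + m * g * x

dp_dt = -sp.diff(H, x)
dx_dt = sp.diff(H, p)
print(dp_dt)  # -g*m
print(dx_dt)  # p/m
```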

Now, a useful thing to define classically is the Poisson bracket. There is some higher math associated with what it actually means (check the Wikipedia entry, or perhaps AN knows more about this), but suffice it to say that we can define the Poisson bracket { , } as

$$\left\{f,g\right\} = \frac{\partial f}{\partial x}\frac{\partial g}{\partial p} - \frac{\partial g}{\partial x}\frac{\partial f}{\partial p}.$$

where f and g are functions of p and x. Basically, all possible configurations of p and x form some sort of special manifold (phase space, a symplectic manifold), and the Poisson bracket is some special operation on that manifold.

You should be able to see that

$$\left\{x,p\right\} = 1$$.

If you have a bunch of x's and p's (i.e. a bunch of different particles, or one particle in a bunch of different dimensions), you end up with

$$\left\{x_i,p_j\right\} = \delta_{ij}$$,

where the RHS turned into the Kronecker delta.
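Here's the bracket as code (a sympy sketch with N = 3 coordinate/momentum pairs assumed, just for illustration), checking the Kronecker delta result directly:

```python
# A Poisson bracket over N coordinate/momentum pairs,
# {f, g} = sum_i (df/dx_i dg/dp_i - dg/dx_i df/dp_i),
# verified against {x_i, p_j} = delta_ij.
import sympy as sp

N = 3
xs = sp.symbols(f'x0:{N}')
ps = sp.symbols(f'p0:{N}')

def poisson(f, g):
    return sum(sp.diff(f, xs[i]) * sp.diff(g, ps[i])
               - sp.diff(g, xs[i]) * sp.diff(f, ps[i])
               for i in range(N))

print([[poisson(xs[i], ps[j]) for j in range(N)] for i in range(N)])
# [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
```

Quantization then trades this bracket for a commutator, which is where Ben's story is headed.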

Ok, up to now it's been poorly motivated definitions. Sorry. The goal is to understand how in the hell we're supposed to quantize this. And in hindsight, I could have just jumped to this point in this long missive and just given you the answer!

Either way, I have to do some grading right now. I'll follow this up directly.
 
D H I thank you, but the QM momentum operator I have scribbled is $$\vec{p} = -i \hbar \nabla = -i \frac{h}{2 \pi} \frac{\partial}{\partial x^i}$$
Oops. That should have been hbar, not h. Off by a factor of $$2\pi$$.
$${\mathbf P} = \frac {\hbar}{i}\mathbf \nabla$$

Now we have the same expression.
 
D H: Of course, $$\frac{1}{i} = -i$$! Just remember: to err is human, to umm is divine.

Ben: Thanks for that little tutorial, I really enjoyed it. Don't think you started too far "back" - you didn't. Please continue, but first a comment or two....

your Poisson bracket has a remarkable resemblance to the Lie bracket. I take it this is no coincidence?

given your definition, I don't quite see that $$\{x,p\} = 1$$. Sorry to be dense (no, I won't look at the Wiki, I never do when a real human person is explaining things to me - it seems strangely discourteous)
 
Ben: Thanks for that little tutorial, I really enjoyed it. Don't think you started too far "back" - you didn't. Please continue, but first a comment or two....

your Poisson bracket has a remarkable resemblance to the Lie bracket. I take it this is no coincidence?

I don't think so. There's some fancy math language here that I will screw up, so let me look it up when I get to my office.

given your definition, I don't quite see that $$\{x,p\} = 1$$. Sorry to be dense (no, I won't look at the Wiki, I never do when a real human person is explaining things to me - it seems strangely discourteous)

Ok.

$$\left\{x,p\right\} = \frac{\partial x}{\partial x}\frac{\partial p}{\partial p} - \frac{\partial p}{\partial x}\frac{\partial x}{\partial p} = 1 - 0.$$

The independent degrees of freedom are taken to be the positions (x) and momenta (p) of the particles. That is, x and p are only functions of time, and not of each other, in general.

More to come!
 
That "momentum operator" is actually the canonical form?
I've seen various "momentum operators" in different papers, so is it not the case that an operator, or its non-canonical version, would depend on what you want to analyse?
 
OK, thanks Ben, though it seems I have to forgive you a typo - which I do "right readily"!
$$\left\{f,g\right\} = \frac{\partial f}{\partial x}\frac{\partial g}{\partial p} - \frac{\partial g}{\partial p}\frac{\partial f}{\partial x}.$$
should be $$\left\{f,g\right\} = \frac{\partial f}{\partial x}\frac{\partial g}{\partial p} - \frac{\partial g}{\partial x}\frac{\partial f}{\partial p}$$. Right? Otherwise it seems $$\frac{\partial x}{\partial x} \frac{\partial p}{\partial p} - \frac{\partial p}{\partial x} \frac {\partial x}{\partial p}= 1-0$$ is inconsistent notation.

More to come!
Promises, promises...

Surely you're not going to claim you have a life to lead?
 
OK, thanks Ben, though it seems I have to forgive you a typo - which I do "right readily"! should be $$\left\{f,g\right\} = \frac{\partial f}{\partial x}\frac{\partial g}{\partial p} - \frac{\partial g}{\partial x}\frac{\partial f}{\partial p}$$. Right? Otherwise it seems $$\frac{\partial x}{\partial x} \frac{\partial p}{\partial p} - \frac{\partial p}{\partial x} \frac {\partial x}{\partial p}= 1-0$$ is inconsistent notation.

Promises, promises...

Surely you're not going to claim you have a life to lead?

Absolutely!!!

Typo indeed.

Sorry for the delay---I'm trying to learn a Mathematica package which will do all of these tedious supergravity calculations for me!

I'll finish what I was trying to get at later, indeed!
 
Here's this '76 paper about Dirac-Pauli electrons and stuff, and they start out with a "quick" review of a couple things, including electrodynamics.
I think they are essentially describing a fibre bundle and the same kind of math, but it predates perhaps, the modern lingo? They discuss a hyperspace, tangent space and pullback, etc.
Maybe someone who can translate it better than me can verify that they are actually talking about the same stuff - now known as a tangent bundle and a total space, curvature and all?
They claim to derive a better result for the classical magnetic moment, so do they? I mean did they, and is their result now accepted as the correct derivation?
http://archive.numdam.org/ARCHIVE/AIHPA/AIHPA_1976__25_4/AIHPA_1976__25_4_345_0/AIHPA_1976__25_4_345_0.pdf
 
Vkothii: Your link stalled my computer, not your fault of course.

Anyhow, it seems the big guns have lost interest in this thread. Shame really, but there you go - I knew I wasn't up to the task.

Tell you what, I used to think that math was hard, and that, in comparison, physics must be easy, since physicists "stole" and then abused mathematics.

How wrong was I? Answer = totally.
 
Anyhow, it seems the big guns have lost interest in this thread

Not that I consider myself a big gun, but no way!

It's been a busy weekend, and I have finals to grade.

More soon!!!
 
Pooh! Who needs tutors? Looking in my yellowing college Phys. Chem. text, and almost pulling my beard out, I think I see where we're headed.

So let's start with the time-independent Schroedinger equation:

$$(-\frac{\hbar ^2}{2m} \nabla ^2 + V)\psi = \epsilon \psi$$ where the $$\epsilon $$ is interpreted as the eigenvalue for the operator $$\mathcal{H}=-\frac{\hbar ^2}{2 m} \nabla ^2 + V $$. This is, of course, the Hamiltonian operator. Then $$\mathcal{H} \psi = \epsilon \psi$$, a very familiar equality.

As is customary, since the Hamiltonian is defined as $$\mathcal{H} =$$ kinetic + potential, one thinks of the quadratic term, i.e. the first, as kinetic energy of a particle with mass = m. Write $$\text{k.e}=\frac{1}{2}mv^2 = \frac{(mv)^2}{2m}$$.

Then comparing this to the kinetic (i.e. first) term in the Hamiltonian, I deduce that $$\frac{(mv)^2}{2m} = -\frac{\hbar ^2}{2 m} \nabla ^2 $$ and so that $$mv = \frac{\hbar}{i} \nabla \equiv \vec{p} $$.

Um...I think I may have a sign wrong somewhere. Oh, I know, I'll take the mathematicians way out - the above equality holds up to sign!!
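For what it's worth, the eigenvalue machinery above checks out symbolically. A sketch for the simplest case I could think of (V = 0 and $$\psi = \sin(\pi x/L)$$, the particle-in-a-box guess; my choice of example, not from the text above):

```python
# Check H psi = eps * psi for the kinetic-only Hamiltonian with
# psi = sin(pi*x/L); the eigenvalue should be hbar^2 pi^2 / (2 m L^2).
import sympy as sp

x, L, m, hbar = sp.symbols('x L m hbar', positive=True)
psi = sp.sin(sp.pi * x / L)
H_psi = -hbar**2 / (2 * m) * sp.diff(psi, x, 2)   # V = 0 inside the box
eps = sp.simplify(H_psi / psi)                    # constant => eigenfunction
print(eps)
```

The ratio comes out x-independent, which is exactly the statement that $$\psi$$ is an eigenfunction of $$\mathcal{H}$$.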
 