The question of multiple dimensions in quantum mechanics:

Surely a simplification, when it is well known that, when used to describe larger atoms, Schrödinger's equation becomes so bulky and unwieldy as to be unusable.
"Unusable" is a simplification. Yes, it becomes bulky and unwieldy, but various approximations and simplifications can be made that make various problems solvable to a high degree of accuracy.
What is the point in describing one aspect and not the other? It (Schrödinger's equation) is also inaccurate when describing larger atoms.
Technically, Schrodinger's equation is inaccurate even for describing the hydrogen atom. As I said previously, it's a non-relativistic equation, for starters, so we know from the start that it's not going to be an exact description. But that doesn't make it useless. Far from it. Approximations are made in science all the time. Nature is complicated and sometimes it takes simple models to start to understand it. (Not that I'd call Schrodinger wave mechanics "simple", but it can get a lot worse, believe me.)
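For instance (a standard textbook illustration, not something specific to this discussion): a simple one-parameter variational treatment of the helium atom, using an effective nuclear charge $$Z - \tfrac{5}{16}$$ in place of $$Z = 2$$, gives a ground-state energy of

$$E_{\text{var}} \approx -2\left(Z - \tfrac{5}{16}\right)^{2} \times 13.6\ \text{eV} \approx -77.5\ \text{eV},$$

within about 2% of the measured value of roughly $$-79.0\ \text{eV}$$, and more elaborate approximations do far better still.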
Finally, let me state that maths is a language like any other, and like any other language it [maths] can say things that are not true. When a statement like "Pigs can fly" is made, it does not mean that, just because it has been said, it is true.
Agreed.
 
"Unusable" is a simplification. Yes, it becomes bulky and unwieldy, but various approximations and simplifications can be made that make various problems solvable to a high degree of accuracy.

Technically, Schrodinger's equation is inaccurate even for describing the hydrogen atom. As I said previously, it's a non-relativistic equation, for starters, so we know from the start that it's not going to be an exact description. But that doesn't make it useless. Far from it. Approximations are made in science all the time. Nature is complicated and sometimes it takes simple models to start to understand it. (Not that I'd call Schrodinger wave mechanics "simple", but it can get a lot worse, believe me.)

Agreed.
Oh, for the days when two plus two equaled four! To those caught up in the mists of high logic and mathematical introspection, it might not be apparent that such arguments are very similar to what one finds in any nursery school, where "it can" and "it can't" can go on forever with neither side giving in.
 
I repeat: the multiverse idea is not part of QM. It is by no means "a direct consequence of the wave function" as you put it. That is a complete misunderstanding on your part.

Are you confusing it with the "Many Worlds" interpretation of QM, perhaps? That, too, is not part of the theory of QM, but is a metaphysical speculative interpretation, one of many, about what QM may be saying about the nature of the world. It has its supporters and detractors - not to mention those who are utterly indifferent to it - but as it makes no testable predictions it is not part of science.
The scientists quoted seem to think it (the many worlds interpretation) is very much a part of modern science, which is what matters, leaving aside nomenclature. Since you seem to be such an expert, could you answer the simple question of how light propagates from point A to point B and what happens in between? Don't refer to Maxwell.
 
The standard (Copenhagen) interpretation of quantum mechanics says that photons can be described as probability waves (roughly speaking). So, light propagates from A to B as a wave. When something measures the light, the wavefunction "collapses" and a photon will be observed (e.g. absorbed by a detector of some kind) at a particular point in space. The probability that the photon will be observed at different points is determined by the wavefunction at each point.
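Roughly, in symbols: if $$\psi(x)$$ is the (normalised) wavefunction along a line of detectors, the Born rule gives the detection probabilities as

$$P(x)\,\mathrm{d}x = |\psi(x)|^{2}\,\mathrm{d}x, \qquad \int_{-\infty}^{\infty} |\psi(x)|^{2}\,\mathrm{d}x = 1.$$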

The many worlds interpretation says something a little different. It says that each time there is a "measurement" (e.g. of the position of the photon) the universe splits into multiple universes. In one universe, observers will see the photon at one point. In another universe they will see it at a different point, and so on for all the possibilities of where the photon might be observed.

Importantly, there is no experimental test that can determine whether the Copenhagen interpretation or the many worlds interpretation (or some other interpretation) of quantum mechanics is the "reality".
 
The scientists quoted seem to think it (the many worlds interpretation) is very much a part of modern science, which is what matters, leaving aside nomenclature. Since you seem to be such an expert, could you answer the simple question of how light propagates from point A to point B and what happens in between? Don't refer to Maxwell.
Oh I'm far from expert. I'm not even a physicist, just a chemist. But as a chemist I worked with QM and its implications for the interactions of atoms and molecules every single day of my university studies, so I'm pretty familiar with those aspects of quantum theory. Much later, in the 90s, there was a time when Jim Baggott (https://en.wikipedia.org/wiki/Jim_Baggott ) and I worked together at Shell and we used sometimes to argue about interpretations of QM in the pub. He handed out copies of his book "The Meaning of Quantum Theory" when it came out in 1992. I have an autographed copy from him. I admit my stance on such questions is coloured by those discussions.:)

If you read the link you will see Baggott shares my view (or, rather, I share his) that the Multiverse idea is not science, as it makes no testable predictions. It is one of those metaphysical speculations that he describes as "fairytale physics". People like Massimo Pigliucci and Peter Woit, both of whom I regard as particularly clear thinkers with functioning bullshit detectors, seem to take much the same view. So Jim is in good company. There is no shame in indulging in metaphysical speculation. Einstein did it too. But one has to realise that this shades off into philosophy rather than science, once one starts elaborating concepts that do not lead to any testable consequences.

There are many, many physicists who dismiss or disfavour the Many Worlds Interpretation. It is emphatically not the dominant interpretation, in any way whatsoever. As a matter of fact, although I was brought up on variants of the Copenhagen Interpretation, I recently read Carlo Rovelli's Helgoland and found his exposition of the Relational Interpretation quite compelling. (This is a fairly recent interpretation that postdates Baggott's book. I'd be interested to know what he thinks about it.) Rovelli reminds the reader that QM takes no position on "what happens" in between the interactions of a quantum system. QM is solely concerned with predicting what those interactions will be (albeit in a probabilistic manner).

So QM doesn't really answer your question of "what happens" to a photon in between emission and absorption. We model the evolution of QM systems mathematically by their wave function, ψ, but as Born realised, ψ does not correspond to any physical property. Rovelli's view is that we cannot even assume QM entities have a continuous existence in any meaningful way. At least, they can't be said to have any defined properties in between interactions. After all, we can only determine what they are by measurement - which involves interacting with them.

The Relational Interpretation also takes the view that the wave function can in general be different for different observers, according to their informational frame of reference. For example, Schrödinger's Cat has a wave function comprising a superposition of "alive" and "dead" states for anyone outside the box, but for anyone inside the box the cat is either alive or dead, so its wave function is not a superposition: different wave function for the same system, depending on informational frame of reference. So there's a nice analogy here with Einstein's relativity.

Light, which you mention in your question, is actually an awkward special case for standard QM since, as massless entities, photons cannot be treated by Schrödinger's equation, because that contains mass in the denominator of the Hamiltonian. So they don't have a wave function in the Schrödinger sense. This is dealt with by the theory of Quantum Electrodynamics (QED), which I'm afraid is out of my scope as a chemist. Perhaps James R can fill that in for you.
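To see why: the non-relativistic Schrödinger equation for a single particle is

$$i\hbar\,\frac{\partial \psi}{\partial t} = \left(-\frac{\hbar^{2}}{2m}\nabla^{2} + V(\mathbf{r})\right)\psi,$$

and the kinetic term has the particle's mass $$m$$ in the denominator, so it simply has no meaning for a massless photon.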
 
Oh I'm far from expert. I'm not even a physicist, just a chemist. But as a chemist I worked with QM and its implications for the interactions of atoms and molecules every single day of my university studies, so I'm pretty familiar with those aspects of quantum theory. Much later, in the 90s, there was a time when Jim Baggott (https://en.wikipedia.org/wiki/Jim_Baggott ) and I worked together at Shell and we used sometimes to argue about interpretations of QM in the pub. He handed out copies of his book "The Meaning of Quantum Theory" when it came out in 1992. I have an autographed copy from him. I admit my stance on such questions is coloured by those discussions.
Congrats!
I will address James R since light is my special area of interest.
 
The standard (Copenhagen) interpretation of quantum mechanics says that photons can be described as probability waves (roughly speaking). So, light propagates from A to B as a wave. When something measures the light, the wavefunction "collapses" and a photon will be observed (e.g. absorbed by a detector of some kind) at a particular point in space. The probability that the photon will be observed at different points is determined by the wavefunction at each point.

The many worlds interpretation says something a little different. It says that each time there is a "measurement" (e.g. of the position of the photon) the universe splits into multiple universes. In one universe, observers will see the photon at one point. In another universe they will see it at a different point, and so on for all the possibilities of where the photon might be observed.

Importantly, there is no experimental test that can determine whether the Copenhagen interpretation or the many worlds interpretation (or some other interpretation) of quantum mechanics is the "reality".
I appreciate and even admire your fealty (only word I can think of) to quantum mechanics; it is little short of miraculous and wonderful. My question is this: an incoming photon with a 500 nm wavelength has a size that is 168 million times the size of the classical radius of the electron at $$2.81 \times 10^{-15}\ \text{m}$$. The classical radius of the electron, often called the "classical electron radius," is derived from classical electromagnetic theory and is given by the formula:

$$r_e = \frac{e^2}{4 \pi \epsilon_0 m c^2}$$

where:
  • e is the elementary charge,
  • $$ \epsilon_0$$ is the vacuum permittivity,
  • m is the mass of the electron,
  • c is the speed of light.
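As a rough check on the numbers, using the standard values $$e \approx 1.602 \times 10^{-19}\ \text{C}$$, $$\epsilon_0 \approx 8.854 \times 10^{-12}\ \text{F m}^{-1}$$ and $$mc^{2} \approx 8.187 \times 10^{-14}\ \text{J}$$:

$$r_e \approx 2.82 \times 10^{-15}\ \text{m}, \qquad \frac{500 \times 10^{-9}\ \text{m}}{r_e} \approx 1.8 \times 10^{8},$$

i.e. a ratio of the same order as the figure quoted above.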
How does quantum mechanics deal with this problem of size? Rather than considering photons and electrons as point particles with well-defined sizes, quantum mechanics describes them as probabilistic entities. Photons, though they exhibit wave-like properties, are described by quantum electrodynamics (QED) as quantized excitations of the electromagnetic field, lacking a strict spatial extent. In this framework, photons are understood to interact with electrons through a probability distribution based on their wave functions, rather than through direct, localized contact (Feynman, 1985). The Heisenberg uncertainty principle further implies that a photon's precise position and momentum cannot be simultaneously known with arbitrary precision, so the interaction is governed by probability amplitudes. The interaction cross-section, rather than physical size, plays a pivotal role in quantum mechanics. The probability of an electron interacting with a photon is governed by the electron-photon interaction cross-section, derived from QED calculations.
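For what it is worth, the classical radius does survive in QED as a parameter in such cross-sections: the low-energy (Thomson) limit of photon-electron scattering has

$$\sigma_T = \frac{8\pi}{3}\, r_e^{2} \approx 6.65 \times 10^{-29}\ \text{m}^{2},$$

an effective target area for scattering rather than a statement about the electron's physical size.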

But consider once again the question of what "168 million times the size of" means. For instance, for Dark Matter to represent 90% of the mass of the Universe requires only 5 times the amount of normal matter. An average atom has a size of $$10^{-10}$$ m and probably the wave function of the electron fits in this atom? Therefore, IMHO no consideration of the electron as cloud or wave function or whatever can explain how this size discrepancy works.
 
I appreciate and even admire your fealty (only word I can think of) to quantum mechanics; it is little short of miraculous and wonderful. My question is this: an incoming photon with a 500 nm wavelength has a size that is 168 million times the size of the classical radius of the electron at $$2.81 \times 10^{-15}\ \text{m}$$. The classical radius of the electron, often called the "classical electron radius," is derived from classical electromagnetic theory and is given by the formula:

$$r_e = \frac{e^2}{4 \pi \epsilon_0 m c^2}$$

where:
  • e is the elementary charge,
  • $$ \epsilon_0$$ is the vacuum permittivity,
  • m is the mass of the electron,
  • c is the speed of light.
How does quantum mechanics deal with this problem of size? Rather than considering photons and electrons as point particles with well-defined sizes, quantum mechanics describes them as probabilistic entities. Photons, though they exhibit wave-like properties, are described by quantum electrodynamics (QED) as quantized excitations of the electromagnetic field, lacking a strict spatial extent. In this framework, photons are understood to interact with electrons through a probability distribution based on their wave functions, rather than through direct, localized contact (Feynman, 1985). The Heisenberg uncertainty principle further implies that a photon's precise position and momentum cannot be simultaneously known with arbitrary precision, so the interaction is governed by probability amplitudes. The interaction cross-section, rather than physical size, plays a pivotal role in quantum mechanics. The probability of an electron interacting with a photon is governed by the electron-photon interaction cross-section, derived from QED calculations.

But consider once again the question of what "168 million times the size of" means. For instance, for Dark Matter to represent 90% of the mass of the Universe requires only 5 times the amount of normal matter. An average atom has a size of $$10^{-10}$$ m and probably the wave function of the electron fits in this atom? Therefore, IMHO no consideration of the electron as cloud or wave function or whatever can explain how this size discrepancy works.
If you use a fictitious and artificial concept like the classical, repeat classical, electron radius, you can hardly be surprised if you get into difficulty. The classical electron radius is just the number you get from the formula for classical electrostatic energy, applied to concentrating a charge equal to that on the electron into a volume such that the resulting energy is equal to the electron's rest-mass energy.

It has no relevance to the process by which an electron absorbs a photon. A free electron cannot absorb a photon. The electron has to be in a bound state of some kind, so that (loosely speaking, i.e. without a long excursion into transition dipole moments) a dipole can exist which can absorb energy from the radiation. You can read more about transition dipole moments here: https://en.wikipedia.org/wiki/Transition_dipole_moment, from which you will see they are to do with wave function phase, i.e. the wave-like nature of electrons. If you try to treat them as classical particles you will get nowhere.
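A quick way to see why a free electron cannot absorb a photon: in the frame where the electron is initially at rest, absorbing a photon of momentum $$p$$ would require

$$\left(mc^{2} + pc\right)^{2} = \left(pc\right)^{2} + \left(mc^{2}\right)^{2} \;\Rightarrow\; 2\,mc^{2}\,pc = 0,$$

which holds only for $$p = 0$$. Energy and momentum cannot both be conserved, so something else (a nucleus, a lattice, another photon) has to take up the recoil.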
 
quant:
I appreciate and even admire your fealty (only word I can think of) to quantum mechanics; it is little short of miraculous and wonderful.
There's no more accurate theory available that fits all the experimental and observational data, as far as I am aware, so my fealty (as you call it) really shouldn't come as such a surprise to you. I'm just going with the best science has to offer so far.

Do you have a better theory?
My question is this: an incoming photon with a 500 nm wavelength has a size that is 168 million times the size of the classical radius of the electron at $$2.81 \times 10^{-15}\ \text{m}$$.
Okay. So what?
How does quantum mechanics deal with this problem of size?
What's the problem?

Atoms typically have sizes of about $$10^{-10}$$ m, but the energy spacing of atomic energy levels corresponds, in some cases, to the energy of visible photons, including ones of wavelength 500 nm. That's why those atoms can absorb and emit light of that wavelength.
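For a 500 nm photon, for instance,

$$E = \frac{hc}{\lambda} \approx \frac{1240\ \text{eV nm}}{500\ \text{nm}} \approx 2.5\ \text{eV},$$

which is a typical spacing between the outer electronic energy levels of an atom, even though the atom itself is only about $$10^{-10}$$ m across.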

Rather than considering photons and electrons as point particles with well-defined sizes, quantum mechanics describes them as probabilistic entities. Photons, though they exhibit wave-like properties, are described by quantum electrodynamics (QED) as quantized excitations of the electromagnetic field, lacking a strict spatial extent. In this framework, photons are understood to interact with electrons through a probability distribution based on their wave functions, rather than through direct, localized contact (Feynman, 1985). The Heisenberg uncertainty principle further implies that a photon's precise position and momentum cannot be simultaneously known with arbitrary precision, so the interaction is governed by probability amplitudes. The interaction cross-section, rather than physical size, plays a pivotal role in quantum mechanics. The probability of an electron interacting with a photon is governed by the electron-photon interaction cross-section, derived from QED calculations.
That's all fine. Where's the problem, again?
But consider once again the question of what "168 million times the size of" means. For instance, for Dark Matter to represent 90% of the mass of the Universe requires only 5 times the amount of normal matter.
What has dark matter got to do with the classical radius of the electron?

Also, not that it matters, but are you sure you've got your numbers right with the 90% and 5 times thing, there?
An average atom has a size of $$10^{-10}$$ m and probably the wave function of the electron fits in this atom?
The wave function of the electron is, technically, spread throughout the whole of space. All wavefunctions are.
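The hydrogen ground state is the standard illustration: its wave function

$$\psi_{1s}(r) = \frac{1}{\sqrt{\pi a_0^{3}}}\, e^{-r/a_0}, \qquad a_0 \approx 0.53 \times 10^{-10}\ \text{m},$$

is non-zero at every distance $$r$$ from the nucleus; it just falls off so rapidly beyond the Bohr radius that the atom has an effective size of about $$10^{-10}$$ m.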
Therefore, IMHO no consideration of the electron as cloud or wave function or whatever can explain how this size discrepancy works.
I must be missing something. You don't seem to have said anything that raises a problem for quantum mechanics, or anything else. You've mentioned size discrepancy as raising some kind of issue. What's the issue, then? Explain.
 
an incoming photon with a 500 nm wavelength has a size that is 168 million times the size of the classical radius of the electron
On top of the problem of the "size" of an electron that others have pointed out, there is also the error of mistaking the wavelength of light for the "size" of a photon.

There are microwave photons that have wavelengths in the centimetre range; radio photons in the metre - and even kilometre - range. One would not say a radio photon's "size" is one kilometre.
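To put a number on it: a photon of 1 km wavelength carries an energy of only

$$E = \frac{hc}{\lambda} \approx \frac{1240\ \text{eV nm}}{10^{12}\ \text{nm}} \approx 1.2 \times 10^{-9}\ \text{eV},$$

and such photons are routinely absorbed by receiving antennas far smaller than a kilometre, which is one way to see that the wavelength is not the photon's physical extent.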
 
On top of the problem of the "size" of an electron that others have pointed out, there is also the error of mistaking the wavelength of light for the "size" of a photon.
Use a microwave oven without the grid in its door.


There are microwave photons that have wavelengths in the centimetre range; radio photons in the metre - and even kilometre - range. One would not say a radio photon's "size" is one kilometre.

You are really beginning to catch on now. What are your conclusions from looking at these different photon sizes?
 
You are really beginning to catch on now. What are your conclusions from looking at these different photon sizes?
I think you may have missed the point: the wavelength of a photon has nothing to do with its size.
 
I think you may have missed the point: the wavelength of a photon has nothing to do with its size.
Presumably, though (I'm rusty on this), its "size" depends on the relative contributions from the wavelengths that go to make up the wave packet, i.e. on how close to "monochromatic" the photon can be said to be, cf. the Heisenberg Uncertainty Principle, Fourier series etc.
 
Presumably, though (I'm rusty on this), its "size" depends on the relative contributions from the wavelengths that go to make up the wave packet, i.e. on how close to "monochromatic" the photon can be said to be, cf. the Heisenberg Uncertainty Principle, Fourier series etc.
His problem is that none of these objects are solid little billiard balls that bounce around on a table. Talking about them as if amplitudes and wavelengths interact with diameters is naive and meaningless. He is trying to apply macro-object physics to the subatomic world, which can only lead to tears and heartbreak.
 
Much as it grieves me to say so(!), I think quant is approximately correct here. The resolving power of an observation is limited by the wavelength of the radiation used to make that observation. Hence, to "see" very small objects you need very short-wavelength radiation.

Think about the light microscope vs. the electron microscope, although it's not obvious to me where wave packets enter the picture, even less Heisenberg or Fourier.
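In rough numbers (taking the usual diffraction limit of about $$\lambda/2$$ as the smallest resolvable detail): for electrons accelerated through a potential of $$V$$ volts, the de Broglie wavelength is approximately

$$\lambda = \frac{h}{p} \approx \frac{1.23\ \text{nm}}{\sqrt{V}} \quad (\text{non-relativistic}),$$

so even a few kilovolts gives wavelengths of a few hundredths of a nanometre, far below anything visible light can resolve.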
 
Much as it grieves me to say so(!), I think quant is approximately correct here. The resolving power of an observation is limited by the wavelength of the radiation used to make that observation. Hence, to "see" very small objects you need very short-wavelength radiation.

Think about the light microscope vs. the electron microscope, although it's not obvious to me where wave packets enter the picture, even less Heisenberg or Fourier.
Yes it's interesting to think about what it can actually mean to speak of the "size" of a photon. It's obviously true that EM waves diffract round objects and the degree to which they can is a function of wavelength. And, in the extreme case of a single photon (e.g. in the double slit experiment), even one individual photon can diffract in this way.

But surely, according to the Uncertainty Principle, a single photon that is truly monochromatic can have no particular position in space: it can be detected with equal, infinitesimally low probability, anywhere along its direction of travel. So its "size", in that sense, is sort of infinite. Whereas a photon whose probability of being detected has a limited spatial extent must be non-monochromatic, being composed of a Fourier sum of different wavelengths that interfere constructively only in a limited region of space (i.e. wave packet) and thereby must have an uncertainty in its momentum (and energy).
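As a concrete, if idealised, example: superposing plane waves with a Gaussian spread of wavenumbers of width $$\Delta k$$ about $$k_0$$ gives

$$\psi(x) \propto \int e^{-\frac{(k - k_0)^{2}}{4(\Delta k)^{2}}}\, e^{ikx}\,\mathrm{d}k \;\propto\; e^{ik_0 x}\, e^{-x^{2}(\Delta k)^{2}},$$

a packet of spatial width $$\Delta x = 1/(2\Delta k)$$, i.e. $$\Delta x\,\Delta p = \hbar/2$$. The narrower the spread of wavelengths, the longer the packet; a strictly monochromatic wave ($$\Delta k \to 0$$) is infinitely extended.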

That at least is what I was thinking. Is that wrong?
 
Much as it grieves me to say so(!), I think quant is approximately correct here. The resolving power of an observation is limited by the wavelength of the radiation used to make that observation. Hence, to "see" very small objects you need very short-wavelength radiation.
quant is talking about something different to the wavelength. He is talking as if photons are particles of a certain diameter. If they are, then that's largely irrelevant to how they interact.

The resolving power stuff is all about diffraction of waves. Photons have an associated wavelength, of course, so they exhibit wave-like properties as well as particle-like properties.

quant is confusing the "size" (diameter) of a particle with the wavelength of a wave.
 