Quantum Quackery Cracked? - Double Slit Experiment

Penrose is out on a limb with his claim that consciousness requires quantum gravity to work. A lot of very bright people disagree with him. In particular, his idea that quantum processes in microtubules in neurons are significant to consciousness seems unlikely given that any quantum coherences there would most likely be lost very rapidly due to external sources of "noise".
 
Bravowon:

The double slit experiment was quite an interesting read but I don't really see the mystery in it. Is an electron a particle or a wave? From looking at the experiment this is what seems logical to me:
The electron particle is fired from the generator. This has a secondary result of creating a wave-front of unknown subatomic properties.
The electron then rides the wave like a little surfer. When the wave hits the single slit the “surfers” make a beeline for the back of the receptor screen – hence particle distribution pattern.
Then when the double slit is introduced the “surfers” are caught up in the interference pattern and are washed up like driftwood in the wave interference pattern.
Introducing the observation device somehow attenuates the “unknown wave” frequency and the little “surfers” are forced off the wave and resume a normal particle distribution pattern.

There are a few potential problems with this idea. First, you are introducing "unknown subatomic particles" that apparently can't be detected by normal means. Second, you need normal matter (such as electrons) to be able to interact with your undetectable particles, such that electrons can ride the wave. But what's the nature of that interaction? How do your undetectable particles exert forces on electrons? What kind of force is it? As you can see, introducing another entity opens a few cans of worms.

Third, how does your model help explain the very low intensity version of the 2-slit experiment, where you fire just one electron at a time at the slits? In that case, we observe that electrons still form an interference pattern over time, and that there are some places on the screen where electrons never land. Do you propose that your guiding waves always exist at the same strength no matter what the strength of the source in terms of "normal" particles? Do your undetectable particles carry energy at all? If so, why does energy seem to be conserved when we only consider normal particles?

The quantum picture, in case you're not completely aware of it, does not describe electrons as particles riding some kind of wave. Rather, it describes the electrons themselves as waves: waves of probability (speaking a little loosely). Essentially, quantum mechanics says there are no point-like particles - just waves that under some circumstances appear to exhibit the properties of pointlike particles.
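That probabilistic picture is easy to demonstrate numerically. Here's a toy Python sketch (my own illustration, with made-up slit geometry, not anyone's published model): each electron's landing position is sampled independently from the standard two-slit intensity pattern, one electron at a time, yet the fringes still build up and the dark fringes stay nearly empty.

```python
import math
import random

def intensity(x, wavelength=1.0, slit_sep=5.0, screen_dist=100.0, slit_width=1.0):
    """Unnormalised two-slit probability density on the screen:
    a cos^2 interference term under a sinc^2 single-slit envelope."""
    phase = math.pi * slit_sep * x / (wavelength * screen_dist)
    beta = math.pi * slit_width * x / (wavelength * screen_dist)
    envelope = (math.sin(beta) / beta) ** 2 if beta != 0.0 else 1.0
    return math.cos(phase) ** 2 * envelope

def detect_one_electron(x_max=40.0):
    """Rejection-sample one detection position from the pattern."""
    while True:
        x = random.uniform(-x_max, x_max)
        if random.uniform(0.0, 1.0) < intensity(x):
            return x

random.seed(0)  # reproducible run
hits = [detect_one_electron() for _ in range(20000)]

# Crude histogram over [-40, 40) in bins of width 2. Bright fringes sit at
# x = 0, +/-20, +/-40; dark fringes at x = +/-10, +/-30 and stay nearly empty.
bins = [0] * 40
for x in hits:
    bins[min(int((x + 40.0) / 2.0), 39)] += 1
```

Run it with a few hundred electrons instead of 20,000 and the histogram looks like noise; the interference only emerges statistically, which is exactly what the low-intensity experiments show.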

It seems that you just have to find what is causing the wave and stop looking at the electron. If you did the double slit experiment with water in a pond and added small polystyrene balls the result would look like the quantum results if you accounted only for the polystyrene strike pattern.
Could this be right or do I have to put up with Deepak Chopra style babble for a bit longer?

The kind of picture you have may be able to explain the 2-slit experiment, but I think you'll have some trouble with some other experiments. For example, you might want to look at interferometers. In that case, we can set things up so that electrons (or, more usually, photons) should, if they acted like particles, be detected in equal amounts (50-50) at two different detectors. But in fact, they are detected 100% at one detector and 0% at the other.
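The interferometer arithmetic is simple enough to check numerically. Here's a minimal sketch, assuming an idealised balanced Mach-Zehnder and one common convention for the 50-50 beam splitter matrix (both assumptions mine): a particle that merely picked one path would exit 50-50, but summing the complex amplitudes sends everything to one detector.

```python
# One common convention for a symmetric 50-50 beam splitter acting on the
# two path amplitudes: reflection picks up a factor of i.
BS = [[1 / 2**0.5, 1j / 2**0.5],
      [1j / 2**0.5, 1 / 2**0.5]]

def apply(u, state):
    """Apply a 2x2 matrix to a 2-component complex amplitude vector."""
    return [u[0][0] * state[0] + u[0][1] * state[1],
            u[1][0] * state[0] + u[1][1] * state[1]]

photon_in = [1 + 0j, 0 + 0j]       # photon enters one input port
after_bs1 = apply(BS, photon_in)   # amplitude splits over both arms
after_bs2 = apply(BS, after_bs1)   # amplitudes recombine and interfere

p_detector_1 = abs(after_bs2[0]) ** 2   # destructive interference: 0.0
p_detector_2 = abs(after_bs2[1]) ** 2   # constructive interference: 1.0
```

The two paths to detector 1 carry amplitudes 1/2 and -1/2 (the second picks up two factors of i), which cancel exactly; a classical "the photon took one arm or the other" picture can't produce that cancellation.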

Also, if your model is to be generally applicable, then it will also need to be able to account quantitatively for things such as the emission spectra of atoms - something that standard quantum mechanics explains extremely well.

I guess the point I'm making is that while you can invent an ad hoc hypothesis to account for the outcome of just about any experiment you want to do, to come up with a physical theory of general applicability is a much trickier proposition.
 
Regarding consciousness and quantum physics.

Quantum physics has already been used to explain the otherwise inexplicable efficiency of photosynthesis.

By hitting single molecules with quadrillionth-of-a-second laser pulses, scientists have revealed the quantum physics underlying photosynthesis, the process used by plants and bacteria to capture light’s energy at efficiencies unapproached by human engineers.

The quantum wizardry appears to occur in each of a photosynthetic cell’s millions of antenna proteins. These route energy from electrons spinning in photon-sensitive molecules to nearby reaction-center proteins, which convert it to cell-driving charges.

Almost no energy is lost in between. That’s because it exists in multiple places at once, and always finds the shortest path.


http://www.wired.com/wiredscience/2010/02/quantum-photosynthesis/

Nature does not discriminate, and uses any and every natural phenomenon.

I've just come over all Michio Kaku. Speculation.
What if consciousness is a natural phenomenon that nature is using, rather than one which comes into being through its operations?
 
I guess the point I'm making is that while you can invent an ad hoc hypothesis to account for the outcome of just about any experiment you want to do, to come up with a physical theory of general applicability is a much trickier proposition.

It was pointed out to me that a similar theory has already been posited in the de Broglie-Bohm or Pilot Wave theory which, although out of favor, does have some merit.
 
Bravowon said:
I think I have worked out how to teach a computer to develop a consciousness. Too many people try to program the whole effort but my approach involves setting up tolerances and leaving the rest up to the computer. Of course I would need the right frame to put the “mind” into or the personality might be a bit AAAHhhhh.
Stay tuned....crazy pseudo-science to come.
Actually this is closer to home for me than Physics. I think you're on the right track too...most of the strong AI research I had ever come across seemed to envision making a program, turning it on, and testing it for intelligence. Think of our baseline for strong AI, though -- human beings, which take months or even years of interacting with their environment to display true intelligence, and this is with the proper hardware!

My theory has always been to give the AI machine adequate environment interaction, the ability to organize its data, and (most importantly IMO, yet it seems to be overlooked) a MOTIVE for action. Humans have a motive of survival, and are presented with finite resources over which they must fight. A computer lacking a similar motive will display no interesting emergent behavior that wasn't explicitly instilled in it.
 
As a populariser of Cosmology and Physics, I think Roger Penrose strikes the right balance.
....

Penrose is an idiot when it comes to mind and consciousness. (as well as his stupidity in Cosmology recently - search on pre-big-bang) He's a mathematician and should stay away from other sciences.
 
Actually this is closer to home for me than Physics. I think you're on the right track too...most of the strong AI research I had ever come across seemed to envision making a program, turning it on, and testing it for intelligence. Think of our baseline for strong AI, though -- human beings, which take months or even years of interacting with their environment to display true intelligence, and this is with the proper hardware!

My theory has always been to give the AI machine adequate environment interaction, the ability to organize its data, and (most importantly IMO, yet it seems to be overlooked) a MOTIVE for action. Humans have a motive of survival, and are presented with finite resources over which they must fight. A computer lacking a similar motive will display no interesting emergent behavior that wasn't explicitly instilled in it.

Right on target. And this was my approach some 25 years ago which I played with very minimally for a while before moving on to other stuff.
 
It was pointed out to me that a similar theory has already been posited in the de Broglie-Bohm or Pilot Wave theory which, although out of favor, does have some merit.

Those theories don't give a self-consistent explanation for nonlocality in quantum phenomena. Read about Bell's theorem for some info on how this works.

In theoretical physics, Bell's theorem (AKA Bell's inequality) is a no-go theorem, loosely stating that:

No physical theory of local hidden variables can ever reproduce all of the predictions of quantum mechanics.

Basically, if you want to explain quantum mechanics (and experiments) in a way that doesn't contradict Relativity, good luck doing that without accepting genuine randomness in the universe.
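To make "good luck doing that" concrete, here's a quick Python check of the CHSH form of Bell's inequality, using the textbook singlet-state correlation E(a,b) = -cos(a-b) at the standard angle choices (the formula and angles are the usual textbook ones, not specific to this thread):

```python
import math

def E(a, b):
    """Singlet-state spin correlation for analyser angles a and b."""
    return -math.cos(a - b)

# Standard CHSH angle choices, in radians.
a, a2 = 0.0, math.pi / 2
b, b2 = math.pi / 4, 3 * math.pi / 4

S = abs(E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2))
# Any local hidden variable theory obeys S <= 2; quantum mechanics
# predicts 2*sqrt(2) (about 2.83), and experiments agree with QM.
```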
 
CptBork said:
Those theories don't give a self-consistent explanation for nonlocality in quantum phenomena. Read about Bell's theorem for some info on how this works.
My recollection is that Bohm abandoned pilot waves after Bell's Theorem, and John Bell himself rescued them again, later, saying that it was simply a non-local yet valid quantum theory.
 
My recollection is that Bohm abandoned pilot waves after Bell's Theorem, and John Bell himself rescued them again, later, saying that it was simply a non-local yet valid quantum theory.

Well if you can show me how Bell rescued pilot wave theory, be my guest. As far as disproving locality, I cite the following quote from the Wikipedia article on John Stewart Bell:

In 1972 the first of many experiments that have shown (under the extrapolation to ideal detector efficiencies) a violation of Bell's Inequality was conducted. Bell himself concludes from these experiments that "It now seems that the non-locality is deeply rooted in quantum mechanics itself and will persist in any completion."[8] This, according to Bell, also implied that quantum theory is not locally causal and cannot be embedded into any locally causal theory.

Now in Relativity, a non-random non-local cause would create time paradoxes.
 
CptBork, granted I haven't had much coffee this morning, but I don't get your point. Are you claiming that Bohm's Pilot Wave theory has been proven inconsistent or not? My point to the thread starter was that his interpretation of the double-slit experiment was equivalent to the Pilot Wave theory, which was a valid interpretation. Do you believe that Bell's Theorem disqualifies the theory?
 
My recollection is that Bohm abandoned pilot waves after Bell's Theorem, and John Bell himself rescued them again, later, saying that it was simply a non-local yet valid quantum theory.
Well if you can show me how Bell rescued pilot wave theory, be my guest.
My own understanding is that Bohm's interpretation was what motivated Bell's theorem: Bohm's theory has a contorted, non-local structure, and the point of Bell's theorem was that any similar interpretation of quantum mechanics would suffer these same features.

As far as disproving locality, I cite the following quote from the Wikipedia article on John Stewart Bell:

In 1972 the first of many experiments that have shown (under the extrapolation to ideal detector efficiencies) a violation of Bell's Inequality was conducted. Bell himself concludes from these experiments that "It now seems that the non-locality is deeply rooted in quantum mechanics itself and will persist in any completion."[8] This, according to Bell, also implied that quantum theory is not locally causal and cannot be embedded into any locally causal theory.
Now in Relativity, a non-random non-local cause would create time paradoxes.
If it's not clear from the quote, Bell considered relativity and quantum theory to be incompatible with one another on a fundamental level. He thought the experimental success of quantum mechanics spelled serious trouble for relativity.
 
Basically it's a choice between locality and counter-factual definiteness (aka reality) for most QM interpretations. I had the impression that CptBork was suggesting that BECAUSE pilot waves was a non-local theory it was bunk. Its non-locality does not disqualify it as a valid QM interpretation.
 
Basically it's a choice between locality and counter-factual definiteness (aka reality) for most QM interpretations. I had the impression that CptBork was suggesting that BECAUSE pilot waves was a non-local theory it was bunk. Its non-locality does not disqualify it as a valid QM interpretation.

You can't have nonlocality, causality and Relativity in a single theory; there's an internal inconsistency if you try. Faster-than-light (nonlocal) signals imply a violation of causality under Relativity, and quantum experiments have for several decades established that there's a nonlocal effect. Throw Relativity out the window if you want, but there's no good experimental reason to. If instead you accept randomness as a fundamental part of the universe, then existing theories match experiments very nicely, and no violations of causality occur because no actual information gets carried faster than light.

If it's not clear from the quote, Bell considered relativity and quantum theory to be incompatible with one another on a fundamental level. He thought the experimental success of quantum mechanics spelled serious trouble for relativity.

I think I misunderstood the quote. I thought he was ruling out any attempt to describe QM using conventional localized mechanics, hence necessitating the acceptance of randomness in order to preserve what we know to be true (based on every experiment to date) from Relativity.

Basically it's a choice between locality and counter-factual definiteness (aka reality) for most QM interpretations. I had the impression that CptBork was suggesting that BECAUSE pilot waves was a non-local theory it was bunk. Its non-locality does not disqualify it as a valid QM interpretation.

What I'm saying is any nonlocal, causal theory is bunk. That or Relativity is bunk, but it's already been 100 years and I don't see Relativity going out the window any time soon, at least not in the case of flat space.
 
I'm going to throw something out here that I really don't understand.

I've read that non-locality, while demonstrated by Alain Aspect's tests of Bell's inequality and by experiments on entangled particles, is a fact of the quantum realm, yet it does not result in any actual information being transmitted. (I must admit that I don't understand that point.) If no information is transmitted instantaneously, there is no violation of causality or of the speed-of-light limit.

Can anyone expand on this?
 
AlexG said:
I'm going to throw something out here that I really don't understand.

I've read that non-locality, while demonstrated by Alain Aspect's tests of Bell's inequality and by experiments on entangled particles, is a fact of the quantum realm, yet it does not result in any actual information being transmitted. (I must admit that I don't understand that point.) If no information is transmitted instantaneously, there is no violation of causality or of the speed-of-light limit.

Can anyone expand on this?
The rub is that we apparently cannot determine (and by that I mean choose) the result of a quantum measurement. Say you and I are light-years apart, each possessing one half of a spin-coupled pair of particles from an EPR-type experiment. We're both free to measure the spin along any axis we choose, but we cannot decide whether it will be + or -. We also cannot determine (and by that I mean "become aware of") when the remote particle has been measured, or along which axis it has been measured**. If we could do any of these things: know "when" the remote particle has been measured, know which remote axis has been measured, or choose a local quantum measurement result, then we could transmit data FTL. As it stands now, even though something appears to be happening FTL, nature does "just enough" to disallow information transmission.

**Although it may seem that there's a clever way around these limitations, there is not. Colluding beforehand on "when" a quantum measurement will be made and along which axis it will be made does nothing - you need to be able to differentiate between TWO options in order to send information (e.g. "if the Chicago Cubs win the World Series, I'll measure the z-axis spin, otherwise I'll measure the x-axis spin").
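A toy calculation makes the "nature does just enough" point explicit. Assuming the standard singlet-state joint probabilities, Alice's local statistics come out exactly 50-50 no matter which axis Bob measures, so no message can ride on the correlations:

```python
import math

def joint_prob(alice_result, bob_result, a, b):
    """Standard singlet-state joint outcome probabilities for
    spin measurements along analyser angles a and b."""
    theta = a - b
    if alice_result == bob_result:
        return 0.5 * math.sin(theta / 2) ** 2
    return 0.5 * math.cos(theta / 2) ** 2

def alice_marginal(a, b):
    """Probability Alice sees +1, for a given remote setting b."""
    return joint_prob(+1, +1, a, b) + joint_prob(+1, -1, a, b)

# Alice's statistics are identical whatever axis Bob chooses:
p1 = alice_marginal(0.0, 0.3)
p2 = alice_marginal(0.0, 2.1)
# both equal 0.5, so nothing Bob does shows up on Alice's side alone
```

The correlations only become visible when the two sides later compare notes, and that comparison has to travel at light speed or slower.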
 
I think I misunderstood the quote. I thought he was ruling out any attempt to describe QM using conventional localized mechanics, hence necessitating the acceptance of randomness in order to preserve what we know to be true (based on every experiment to date) from Relativity.
Bell's own view was that he was ruling out locality, period. Proofs of Bell inequalities don't explicitly assume "counterfactual definiteness"[sup]1[/sup], and some versions explicitly don't assume determinism. The bound follows completely from locality and the so-called "free choice" hypothesis[sup]2[/sup]. Bell inequalities still apply to the more general class of fundamentally stochastic locally causal theories.

Historically, Bell's theorem originally appeared as a response to the 1935 EPR paper, whose authors argued that QM couldn't be considered a "complete" theory, basing their case on an example of two spatially separated particles entangled in position/momentum. For anyone who isn't already familiar with the EPR argument, it goes something like:
  1. QM presents us with a dichotomy: there are sets of complementary observables, such as position and momentum, which are not simultaneously well-defined. We must accept either that
    • the wavefunction/state description of QM is incomplete, or
    • if we want to retain QM, we have to accept that eg. position and momentum do not have simultaneous reality.
  2. Assume QM is a complete theory, so we reject (1a) and consequently we're forced to accept (1b).
  3. QM allows for the existence of pairs of particles in entangled states. Suppose particles A and B are spatially separated and entangled in position/momentum. Then:
    • If I measure A's position, then B is projected onto a particular position state. Its position is completely determined before any measurement is made and therefore B has a "real" (ie. exact, predetermined) position.
    • If instead I measure A's momentum, then B is projected onto a momentum state and, by the same argument, has a "real" momentum.
  4. We come to the conclusion that B must have both a real position and a real momentum.
  5. This contradicts point 2. Reductio ad absurdum. We're forced to accept that (1a) is true.
The authors ended their article noting only a single caveat: step 4 only follows from step 3 because they were assuming that whether B's position and momentum are "real" should be independent of which measurement I choose to make on A a long distance away. You escape the conclusion if you allow the choice of measurement on A to instantaneously toggle which of B's position or momentum is predetermined, which violates locality.

What often seems to be misunderstood is that Bell actually sympathised with Einstein on this point: basically, his response amounted to something like "yes, unfortunately QM is non-local, but sorry, it doesn't look like there's much we can do about that." It wasn't originally a question of whether "classical" physical principles could be restored by "hidden variables".

There's a good exposition of Bell's views, which makes for reasonably light reading, available on arXiv. The EPR paper is also freely available online.

*****

[sup]1[/sup]By this I mean that Bell's theorem isn't based on any assumptions about whether eg. particles can have a simultaneously well-defined position and momentum. Giving up on that and for example accepting Heisenberg uncertainty doesn't help you restore locality. Bell inequalities only rely on the assumption that the results of measurements are specific and "real".

[sup]2[/sup]This is an assumption, consistent with experience and arguably necessary for applying the scientific method at all, that arbitrarily strong correlations between systems don't exist in nature, or at least that it's possible to "insulate" against them when necessary in practice. Otherwise you could claim that nature is local and that it's only some detail of the initial conditions of the universe that caused it to evolve in such a way that every Bell experiment came out "wrong". You could use that explanation against any experimental result you didn't like.
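For completeness, the local bound itself can be brute-forced. In a deterministic local model the hidden variable simply fixes an outcome for each possible setting, so we can enumerate all 16 assignments and evaluate the standard CHSH combination (a sketch, my own illustration):

```python
from itertools import product

# Enumerate every deterministic local strategy: the hidden variable fixes
# an outcome (+1 or -1) for each of Alice's settings (A, A') and Bob's (B, B').
best = 0.0
for A, A2, B, B2 in product([+1, -1], repeat=4):
    S = A * B - A * B2 + A2 * B + A2 * B2
    best = max(best, abs(S))
# best comes out to 2: no local assignment reaches the quantum value 2*sqrt(2)
```

Probabilistic local models are just mixtures of these deterministic strategies, so by convexity they obey the same bound of 2, which is what the experiments violate.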
 
Penrose is an idiot when it comes to mind and consciousness. (as well as his stupidity in Cosmology recently - search on pre-big-bang) He's a mathematician and should stay away from other sciences.

Maybe. Once these academicians get their legs under the table as professors at some renowned institution, they seem to suffer from delusions of grandeur.
Perhaps his earlier books are the best read.

Would you say that the same goes for Hawking?
I watched his recent TV series, and I thought I could see him morphing into Kaku at times.
I hate to say that because he is one of my lifetime heroes.
 
CK, yeah a bit. Getting a bit woo-woo I think but at least he still seems to have a brain in his ravaged body.
 
Can anybody confirm the validity of the luminiferous aether hypothesis? It seems that the "hidden variable" factor may have some credibility here. I think my case of cognitive dissonance leans towards the idea that we are immersed in a sea of "matter" that we cannot detect (does a fish know it is in water?). Here is an experiment conducted in Germany that may need to be quantified further but is quite interesting nevertheless (ie. it could just be mirror strain):
blog.hasslberger.com/2009/09/extended_michelsonmorley_inter.html#more
QM at the moment seems to go around the sun to get to the moon and I love and hate that fact at the same time.
 