In 1898, William James was engaged by Harvard to lecture on the subject of “human immortality”. He devoted the lecture to the hypothesis that the brain does not create consciousness, but rather that the brain is a connector between consciousness and physical reality. Wireless transmission was the new technology of that era, and he used the metaphor of the brain as a radio transmitter and receiver, connecting the world of space and matter to a discarnate realm of the mind, Descartes’ res cogitans.
My thesis in this essay is an extension of James’s premise. Consciousness is not something to be explained by something physical. Consciousness is more fundamental than physical matter, and the physical world exists because consciousness created it. Last week, I presented two arguments in favor of this idea: (1) Quantum physics needs an “observer” outside of physical matter to be complete and consistent. (2) There is solid experimental evidence that human intention can affect quantum probabilities. Below, I will present three more: (3) McFadden’s theory of evolution as guided by the Inverse Quantum Zeno Effect; (4) the Anthropic Coincidences, which suggest that the fundamental laws of physics were rigged in such a way as to make life possible; and (5) near-death experiences and other rare and surprising phenomena that attest to conscious awareness, and even perception, without a physical brain. As promised, I’ll conclude with some further proposed experimental tests of this paradigm.
Where did the rich diversity of life on earth come from? We’ve been offered two competing stories.
According to the first, God created the world in 6 days. Day 3 was devoted to all of plant life, Day 5 to the animals of the oceans, and Day 6 to the land animals. Before he rested on Day 7, God gifted the whole kit and caboodle to Man, all life on earth to use or abuse as he saw fit.
In the second story, some chemicals came together by chance in a tidal pool or a hydrothermal vent that happened to have the property of autocatalysis, which is to say that this set of chemicals, collectively, caused simpler chemicals in the environment to react together to form more of the autocatalytic set. The fidelity of this process was not 100%, so the chemicals changed slightly as they generated more of themselves. Every once in a while, the new chemicals turned out to be not just a less faithful copy of their “parents”, but actually better at autocatalysis. Thus began a Darwinian process of random mutation and natural selection, which accounts for the full diversity of life on earth.
How plausible are these stories? People who believe story #1 don’t worry about plausibility, because they trust the divine origin of their story. Curiously, most people who believe story #2 also don’t think about plausibility, because they trust the scientists who have thought carefully about the question and they don’t trust the Bible thumpers who peddle story #1.
The least plausible part of story #2 is the pre-Darwinian origin of life: how did non-living matter first find its way into a system that was capable of reproducing and evolving? I wrote about problems with the story of the chemical origin of life last year, featuring the work of synthetic chemist James Tour.
But Darwinian evolution itself enjoys its universal dominance in the scientific mainstream less because of scientific evidence than because of political dynamics. Evolutionary scientists who try to question the plausibility of Darwinian evolution on grounds that are empirical, logical, and quantitative are told to keep quiet, lest they give comfort to the Bible thumpers. One such was Sir Fred Hoyle, a prolifically creative astronomer of the mid-20th century who read and understood matters outside his primary field. His prominence in the scientific community gave Hoyle freedom to speak his mind, and toward the end of his life, he wrote,
Life as we know it is, among other things, dependent on at least 2000 different enzymes. How could the blind forces of the primal sea manage to put together the correct chemical elements to build enzymes?…The chance that higher life forms might have emerged in this way is comparable to the chance that a tornado sweeping through a junkyard might assemble a Boeing 747 from the materials therein. — The Intelligent Universe (1983)
More recently, Richard Watson has laid out the scientific case for an alternative evolution, including non-Darwinian processes, in a series of five videos. Watson is a professor of biology at the University of Southampton. He does not deny the fossil record, the analysis of genetic relatedness, or the history of life evolving; he does question whether transitions from one life form to another can be completely explained within the confines of the neo-Darwinian framework. In that framework, there is only the blind and random occurrence of genetic mutations, plus the process of natural selection, by which those mutants that reproduce more rapidly come to dominate the population. Another prominent scientist who has issued credible critiques of neo-Darwinism is Denis Noble.
My speculation here follows Johnjoe McFadden more closely than Watson or Noble. Yes, evolution gradually created the diversity of life on earth from simple life forms. But no, that process cannot be fully explained by random mutation and natural selection. There has been a guiding hand of consciousness, altering the quantum probabilities so as to create the rare, pivotal events that led to new adaptations. The richness and complexity of life on earth is not a self-organizing, emergent phenomenon but the creation of beings that already had consciousness but wanted physical bodies in which to incarnate.
Johnjoe McFadden is a professor of molecular genetics at the University of Surrey, UK. In his 2000 book Quantum Evolution, he offers, for the first time, a basis in quantum physics for understanding the fossil record of evolution. Natural selection works as Darwin described it, but mutations are not “random” events. They are guided by consciousness, through a process akin to the biasing of quantum randomness documented by Jahn and Dunne. Consciousness is directing evolution in order to create ever more complex and interesting vehicles for itself.
You’ve probably heard that one of the founding principles of QM is that every measurement changes what is being measured. This idea is closely tied to Heisenberg’s Uncertainty Principle. In an application of this idea, repeated measurements can hold a system in a quantum state from which it might (without these measurements) gradually drift away. This is called the Quantum Zeno Effect, named for the Greek philosopher who “proved” that motion is logically impossible.
Drifting from a particular state is not mysterious. It is the quantum analog of a classical system moving when it is subjected to forces. If you make a measurement of an electron and the result comes up equal to X, then, if there are forces at play, it won’t stay X for long. But if you don’t wait, and instead make the same measurement again immediately after the first, there is a high probability that the result will again come out equal to X.
Repeating the same measurement often enough, you can stop the electron from drifting away from state X. This is a uniquely quantum effect, with no analog in classical mechanics.
(Technically: If the time between your repeated measurements is Δt, then the probability that any one repetition gives a different result is proportional to Δt², so if Δt is small, the probability that one time you’ll be surprised and the repeated measurement will come out differently doesn’t accumulate as fast as time passes. By making the measurements at very close intervals, you can keep the system in the same state for a long time with high probability.)
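This Δt² bookkeeping is easy to check numerically. Here is a minimal sketch (my own illustration, using the textbook survival probability cos²(ωΔt) for a two-level system that would otherwise oscillate out of its initial state):

```python
import math

def survival_probability(omega, total_time, n_measurements):
    """Probability that every one of n equally spaced measurements finds
    the system still in its initial state. For a two-level system that
    would otherwise oscillate at rate omega, the per-interval survival
    probability is cos^2(omega * dt) ~ 1 - (omega*dt)^2 for small dt."""
    dt = total_time / n_measurements
    return math.cos(omega * dt) ** (2 * n_measurements)

# Without any intermediate measurement, the state would flip completely:
omega, T = 1.0, math.pi / 2
for n in [1, 10, 100, 1000]:
    print(n, survival_probability(omega, T, n))
```

With a single measurement at the end, the state has certainly drifted away; with 1000 closely spaced measurements, the survival probability exceeds 99%. The failures accumulate as n·(ωΔt)² = (ωT)²/n, which vanishes as n grows.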
The Inverse Quantum Zeno Effect is a prescription for making “measurements” that gradually change, and drag the system along in tiny jumps from the state where you find it to the destination state of your choosing.
For example, suppose the tiny magnet that is an electron is pointing North, and you want it to point South. If you measure the magnet’s N/S orientation over and over, it will remain pointing N. That’s the QZE, and it’s the opposite of what you want. But suppose you rotate your measuring tool through 90 degrees and measure the East-West magnetic field—what happens then? Answer: Half the time it will come out East and half the time West. Next repeat the measurement “North or South?” and (whether it is now E or W, either way) half of the time it will choose South. This happens without waiting for any precession. The successive measurements can follow almost immediately upon one another, yet half the time you find that the North orientation has switched to South. This simple, two-step procedure moves half the electrons from N to S orientation.
(If you’re not familiar with this property of quantum measurement, you might want to reread the last paragraph and realize how strange it is. What you choose to measure forces a choice on the system, and that forced choice actually moves the system to a new state.)
You can do better than half. Measure the electron along the 45 degree axis, so it must choose NE or SW. Then rotate another 45 degrees and measure E or W, then NW or SE, finally measure N or S again. Now (if I’ve done the calculation correctly) you’ll find the magnet has turned from N to S ⅝ of the time.
Continuing along this line: Instead of rotating 45 degrees each time, rotate your measuring apparatus just 1 degree to the East, so the measurement is asking “almost North or almost South?” If the apparatus is just 1 degree from North, then almost all the time it chooses “almost North” and almost never “almost South”. You can continue this process with 180 measurements spaced 1 degree apart, moving the apparatus 1 degree at a time from North to South. When you’re done making these 180 measurements, the electron’s magnetic field will now be pointing South (with very high probability).
This is the Inverse Quantum Zeno Effect. You have (very probably) moved the magnet from N to S, not by applying any force to it, but only by measuring it — and (our hypothesis is that) measurement is not a physical but a cognitive process.
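The numbers in the last few paragraphs can be verified with a short calculation. The sketch below (my own illustration) treats the sequence as a two-state chain in which the spin follows each new measurement axis with the standard spin-½ projection probability cos²(Δθ/2), and jumps to the opposite direction otherwise:

```python
import math

def prob_flipped(n_steps):
    """Rotate the measurement axis from North (0 deg) to South (180 deg)
    in n equal steps. At each measurement the spin follows the new axis
    with probability cos^2(dtheta/2), or jumps to the opposite direction
    with probability sin^2(dtheta/2). Returns the probability that the
    spin ends up pointing South."""
    dtheta = math.pi / n_steps
    p_follow = math.cos(dtheta / 2) ** 2
    # Two-state chain: P(aligned after n steps) = 1/2 + (1/2)*(2p - 1)^n
    return 0.5 + 0.5 * (2 * p_follow - 1) ** n_steps

print(prob_flipped(2))    # two 90-degree steps: exactly 1/2
print(prob_flipped(4))    # 45-degree steps: 0.625 = 5/8
print(prob_flipped(180))  # 1-degree steps: about 0.986
```

This reproduces the ½ of the two-step procedure, the ⅝ of the 45-degree sequence, and a better-than-98% success rate for 180 one-degree steps.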
This IQZE provides a plausible mechanism for William James’s proposal, that the brain transcribes intent from a conscious but disembodied entity and gives that intent power to create nerve impulses and, thereby, to move the body. And long before there were brains, the IQZE was working on the simpler task of guiding genetic mutations and other small changes which have led life through the evolutionary diaspora.
Physics, as a science, has to start somewhere. There are certain facts about the world and laws of nature that are taken as given, and we don’t ask “why?”. Space has 3 dimensions. The force of gravity is forty orders of magnitude smaller than the force of electric attraction and repulsion. The speed of light is 3×10¹⁰ cm/sec. The mass of an electron is 9.11×10⁻²⁸ g. This is just the way the world happens to be.
But beginning about 1970, astrophysicists had a good enough idea about the Big Bang and the subsequent history of the universe, about how stars form and where the chemical elements come from, that they could ask for the first time, “what if these fundamental constants of nature were different?”
There were many public discussions and many articles [e.g. Carr and Rees], and three books that I would recommend, by Barrow and Tipler, Paul Davies, and Martin Rees.
The conclusion is: All of these numbers have to be just right, or the world can’t support life. There are many contingencies on the way from the Big Bang to the human brain, and each one is a fantastically improbable Just So Story.
Chemistry is one of the things we take for granted. Technically, we might argue for the possibility of some kind of life not based on chemistry, but it is probably beyond our ability to imagine. So many things have to be just right in the “recipe” for our universe in order for chemistry to be possible.
Chemistry is quantum. The very existence of atoms is quantum. If our physics were ruled by Newtonian mechanics or any rules that you might regard as less strange than QM, there would be no chemical elements with such diverse chemical affinities and bonding properties.
[Historic footnote: by the early years of the 20th century, physicists knew enough about atoms to know there were negative electrons circling a positive nucleus. But they only had classical mechanics. According to classical mechanics, the electron should give up its energy as light and spiral into the nucleus. Atoms should self-destruct in a tiny fraction of a second. They were puzzled. Max Planck took the first step toward quantum mechanics in 1900, and Niels Bohr resolved this particular conundrum with his quantized model of the atom in 1913.]
- The fact that protons and neutrons stick together in the nucleus so that ~100 different chemical elements are possible is dependent on a very close balance between the electrical repulsion of protons and the attraction of the strong nuclear force. Tipping the balance a little toward electrical repulsion would mean that there are no elements other than hydrogen. Tipping the balance a bit toward the strong force would mean that in the first minutes of the Big Bang, all the protons would merge into super-large nuclei, and again there would be no chemistry.
- Thanks to Max Born and Linus Pauling in the 1930s, we have some idea why chemical bonds have the geometry and electrical properties that they do, but this depends on obscure details of QM. For example, two molecules that are absolutely essential for life are CO2 and H2O. CO2 is a gas because the molecule is linear. H2O is a bent molecule with negative charge on one side and positive on the other, and for this reason it is a liquid. Life depends critically on the very special properties of water and almost as much on the special properties of carbon dioxide.
There is a quantum mechanical explanation for this difference in geometry, but it is highly mathematical, it is based on heuristics and approximations, and I don’t know if anyone would have predicted it were it not already a known, observed fact.
The take-home message is that all the complex chemical geometries that make life possible are highly contingent results based on the particulars of quantum physics.
- If gravity were a little weaker, stars and galaxies would never form; if gravity were a little stronger, the Big Bang would collapse back on itself before anything interesting had a chance to happen.
- In addition to the “strong nuclear force” there is a “weak nuclear force” associated with neutrinos and with certain ways that atoms can spontaneously decay. The weak force, too, is constrained by Anthropic logic to be exactly as weak as it is. Were it weaker, all the hydrogen in the universe would have turned into helium in the first three minutes, and fusion as an energy source would be unavailable to stars. Were it stronger, all the chemical elements that are formed in the late stages of a star’s history would stay inside and die with the star, and they would not be recycled into second generation stars that harbor planets and the possibility of life.
- If space had two dimensions instead of three, there wouldn’t be room for all the plumbing and circuitry that a living body depends on, because no two pipes or wires can go through one another. If space had four dimensions, gravitational orbits would be unstable and planets would bounce chaotically between much too hot and too cold for life.
- The “oomph” with which the Big Bang exploded was delicately balanced against the available matter. Had the BB exploded with less energy, it would quickly have collapsed back on itself, with no time to form stars and galaxies. Had it exploded with more energy, matter would have been flying apart so fast that gravitation would have been unable to form galaxies or stars at all.
- [Here’s one that was my personal contribution to the field when I was a grad student in astrophysics 50 years ago.] The sun’s radiation is predominantly visible light, peaking in the green part of the rainbow. This is exactly the energy that chlorophyll needs to convert sunlight to chemical energy. “Aha—” (you say) “this is not a coincidence. Plants evolved to use the available light.” Yes, to some extent this is true. But the coincidence is that the energy of green light is just enough to break the chemical bonds between C and O in carbon dioxide, but not enough to knock the atoms to kingdom come, as UV or X-rays would. The temperature at the center of the sun is millions of degrees (corresponding to X-rays), but by the time the energy diffuses out to the sun’s surface, the temperature is “only” a few thousand degrees. The reason that the surfaces of stars all have about this same temperature is quite complicated, and there is no guarantee that the temperature should be just right to create light that can break a chemical bond, yet not so hot as to create UV, which would break organic chemicals apart.
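The back-of-envelope version of this coincidence, assuming a solar surface temperature of roughly 5800 K (the standard figure), uses Wien’s displacement law for the peak of the spectrum and E = hc/λ for the photon energy:

```python
# Physical constants (SI units)
h = 6.626e-34    # Planck constant, J*s
c = 2.998e8      # speed of light, m/s
b = 2.898e-3     # Wien displacement constant, m*K
eV = 1.602e-19   # joules per electron-volt

T_sun = 5800.0                     # approximate solar surface temperature, K
lam_peak = b / T_sun               # Wien's law: peak wavelength of the spectrum
E_photon = h * c / lam_peak / eV   # energy of a photon at the peak, in eV

print(f"peak wavelength: {lam_peak * 1e9:.0f} nm")  # ~500 nm: green light
print(f"photon energy:   {E_photon:.2f} eV")        # ~2.5 eV
```

A photon of about 2.5 eV sits right in the range of typical chemical bond energies (a few eV), while UV and X-ray photons carry enough energy to shatter organic molecules outright.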
There are many other such “coincidences”. The physical laws that just happen to be what they are also just happen to be exactly what is needed to make life possible. The old idea that life just took advantage of an arbitrary set of physical laws is not plausible.
The vast majority of imaginable physical laws give rise to universes that are terminally boring. They quickly go to thermodynamic equilibrium = “heat death”, so that nothing can happen. Or they Bang and then turn around and collapse so quickly that there’s no time for anything interesting. Or they don’t support chemistry, or anything like it. Or they produce starlight that is too hot or too cold to interact with chemistry, so there’s no photosynthesis.
So why do the fundamental constants have the values that they do? Because if they didn’t, we wouldn’t be here to be asking the question. This has become known as the Anthropic Principle, and the particular values of the physical constants are called the Anthropic Coincidences.
Reasoning backward from the fact that “humans exist” — this isn’t the kind of logic that science is accustomed to. Can it be justified?
There are two ways to think about the Anthropic Principle. First, it may be that a pre-existing consciousness created the universe as a home for itself, and adjusted the laws and the numbers to make it work. The majority of physicists don’t like to think this way, because it sounds too much like “God created the world”, and they left religion behind long ago.
So, second: It may be that every logically possible universe “exists”, embodying every conceivable law of physics, every combination of larger and smaller values of the constants that are part of these laws. There are lawless, chaotic universes, too. And there are simple, boring universes without enough mathematical structure to support complexity. Almost all these universes have no life, no people, no physicists to ask questions about the laws of nature. In this sense, it is no accident that we find ourselves in one of the extremely rare universes in which the numbers are just right to make life possible.
The choice between these two perspectives is a matter of taste. People who prefer “consciousness rigged the game” will say that a zillion universes violates Occam’s Razor, and the fact that they are unobservable makes the hypothesis unscientific. People who prefer the extravagance of the multiverse say that the whole idea of a res cogitans is unnecessary, and that the simplest explanation for the physical universe should invoke nothing outside of physics. I have taken my stand with the former.
Richard Dawkins is often quoted, “The universe that we observe has precisely the properties we should expect if there is, at bottom, no design, no purpose, no evil, no good, nothing but pitiless indifference.” What universe does he live in? It’s more accurate to say that the universe that we observe seems to be created with life in mind, and the only way to avoid that conclusion is to turn the usual scientific conventions about causality on their head, and then invent an infinity of unobservable universes — all this in order to defend a pre-judged philosophical position that only physical matter is real.
There is abundant evidence for consciousness apart from brains. Almost all this evidence has been excluded from mainstream scientific discussion, precisely because it cannot be reconciled with the prevailing scientific paradigm. I have found it convincing, and I invite you to investigate it if you are open to these ideas. Some of my favorite sources, accessible and fun to read, are Surviving Death by Leslie Kean, Phenomena by Annie Jacobsen, Children’s Past Lives by Carol Bowman, and Extraordinary Knowing by Elizabeth Mayer.
Ian Stevenson was chair of the psychiatry department at the University of Virginia when he began, in the 1960s, to investigate small children, 2 to 5 years old, who told stories about having lived before. Because he had a reputation to uphold and he was out on a limb, he was particularly cautious and skeptical when he compiled evidence in his first book, Twenty Cases Suggestive of Reincarnation. He cited evidence that children came into the world knowing things that they had no way to know through their senses, unless somehow they were connected to people who had died before they were born. In America, the most famous such case is James Leininger, born 1998, now a young man. Stories of reincarnation provide anecdotal evidence for the notion that consciousness exists apart from the body, comes into the body temporarily at birth, and departs at death.
Many popular books anthologize stories of near-death experiences. Typically, people have vivid experiences during times when they are clinically dead and there is no electrical activity in the brain. Is there a way to tell whether these are false memories, formed instantaneously in the moment when the brain returns to life? In a small percentage of cases, people accurately report details of their time in the operating room. They remember things the surgeon said and tools that were used, all as seen from above, floating over their lifeless bodies. In a smaller percentage of cases, people come back from NDEs having “visited” a near friend or relative, and they accurately report things that they learned while their consciousness was far outside the body.
Remote viewing is the crazy idea that people can perceive events and places that are far from their physical bodies, or in the past or in the future. Proponents suspect that we all have this ability to some extent, and we occasionally know things without any way we could have learned them through our senses. From 1977 to 1995, the US CIA recruited people who demonstrated special talent as remote viewers and engaged them to spy on the Soviet Union and elsewhere. Routinely, these people with special abilities were able to know things they had no business knowing. Occasionally, the detail and accuracy of their reports was so remarkable that no analysis of the probability of the event as a chance occurrence could do it justice. To this day, oil companies engage psychics to help them know where to drill, and archaeologists employ some of the same people to identify sites for excavation.
Perhaps you’ve read My Stroke of Insight, by Jill Bolte Taylor, or you have heard her TED talk. When half her brain was off-line, Dr Taylor experienced more consciousness, not less. It is suspected that DMT and other psychedelic agents shut down portions of the brain, and people who take these drugs report “expanded consciousness”, albeit with impairment of some mental functions. This could be taken as an indication that focus of the mind is narrowed by the brain, and that without the five senses we might expect to be aware of more “sixth sense” information.
If you dismiss all such stories as being too far-fetched to be worthy of your time, you are in good company. That was my position for the first 55 years of my life. But subsequent reading has convinced me that many of these stories are credible, and that they demand explanation beyond anything that our current science can offer.
Experiments that challenge the foundations of reductionist science are not the highest priority for institutions that have built their theoretical edifice on reductionist science. The legacy of the Princeton PEAR lab has continued at the Institute of Noetic Sciences and at the Society for Scientific Exploration, but this kind of research remains in a backwater. If I were in charge of funding at the NSF, my priorities would be different.
Experimental design is based implicitly on assumptions about reality. If we wish to challenge those assumptions with an experimental protocol, we are in treacherous territory. Interpretation of the results depends on other assumptions about reality.
How can we be sure which of our cherished but unconscious assumptions have been violated? Some assumptions are built so deeply into our thought process that we may invoke them without noticing:
- That something that happens later cannot affect something that happens earlier
- That if you do an experiment and I do exactly the same experiment, we should get the same result
- That things nearby are more likely to exert an influence than things far away.
We are looking to detect a direct effect of mind on matter, which seems as incomprehensible and implausible as violation of the above assumptions. Can we still do science if these assumptions are called into question? Yes, but we must be more careful in our experimental design and our thought process. Experiments must be physical, and from them we wish to draw specific conclusions about non-physical things. To create a clean experimental design is more challenging than usual.
I am proposing that psychokinesis is so reliable that it enables a soul to take control of a brain; yet the only quantum psychokinesis that has been demonstrated in a lab is a tiny shift from randomness in the intended direction.
My hypothesis is that the power of the mind is so weak because there is no skin in the game. There are no consequences to the result of the quantum random numbers, aside from a number on a computer screen.
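To give a feel for how weak the laboratory effect is, here is a rough power calculation (normal approximation to the binomial; the effect sizes are my illustrative assumptions, with 10⁻⁴ being roughly the order reported in the PEAR literature):

```python
import math

def trials_needed(delta_p, z_alpha=1.96, z_beta=0.84):
    """Approximate number of binary trials needed to detect a shift of
    delta_p away from p = 0.5, at the 5% significance level with 80%
    power, using the normal approximation to the binomial."""
    sigma = 0.5  # standard deviation of a single fair-coin trial
    return math.ceil(((z_alpha + z_beta) * sigma / delta_p) ** 2)

for dp in [1e-2, 1e-3, 1e-4]:
    print(f"shift {dp}: about {trials_needed(dp):,} trials")
```

A shift of one part in ten thousand requires on the order of 200 million trials to detect, which is why these experiments run for years.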
How might we create real consequences for the experimental subject? One experimental design is inspired by Frank Stockton’s short story “The Lady, or the Tiger?”. Our subjects would be presented with two doors. A quantum coin toss determines which door leads to a beautiful young maiden, whom the subject will immediately marry, and which one leads to a hungry tiger. My prediction, of course, is that there will be a strong statistical bias toward the former outcome. I have proposed this excellent experimental protocol on numerous occasions, but thus far I have been unable to get approval from the ethics review board at the university.
While I’m waiting for IRB approval, I have thought of the next best thing. Hook the quantum random numbers to a Gro-Lite that shines on a potted plant. Let the plant perform the psychokinesis. A few years ago, I asked Brenda Dunne about this protocol, and indeed she and Dr Jahn had run this experiment at PEAR with robust results, but had never written it up.
I also learned that the PEAR lab had embedded quantum random number generators in a robot that executed a random walk. Children came to “play” with the robots by telling them, psychically “come to me” and the robots tended to obey. Again, these results were never quantified and published.
This kind of experiment has potential to provide evidence (or not) for strong influence of mind on matter, which is essential to my hypothesis.
Jason Jorjani has claimed to see telepathic knowledge about details of his own life in response to his queries to ChatGPT. If my paradigm is correct, then brains should be capable of telepathy, but computers should not. I have argued that “extraordinary knowing” is a quantum phenomenon, and that today’s computers, which operate in a deterministic, classical domain, should never display telepathy. So this is a proposal that could potentially falsify my central hypothesis. There are protocols (e.g. Ganzfeld) in which humans reliably demonstrate telepathic knowledge. My prediction is that, in parallel tests of AIs, the AIs will score consistent with chance.
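Scoring such a parallel test is straightforward. In the standard four-choice Ganzfeld design the chance hit rate is 25%; the sketch below (session counts made up for illustration) computes the exact binomial probability of a given hit count arising by chance:

```python
from math import comb

def binom_tail(n, k, p=0.25):
    """P(X >= k) for X ~ Binomial(n, p): the probability of scoring k or
    more hits by chance in n sessions, where each session has hit
    probability p (25% in a four-choice Ganzfeld design)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Hypothetical illustration with 40 sessions:
print(binom_tail(40, 13))  # a 32% hit rate, of the size often claimed for humans
print(binom_tail(40, 10))  # exactly chance-level performance (25%)
```

An AI scoring at chance would produce unremarkable tail probabilities like the second value; a consistent excess over 25% across many sessions would drive the tail probability toward zero.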
I propose that no deterministic computing machine can ever cross a threshold and become conscious. But this is not to say it is impossible to build a conscious robot. In my paradigm, living things are machines that leverage quantum events to control their operation, at a high level, and in interesting and complex ways. Consciousness comes to occupy a body and control it. We might build a machine to channel the intentions and realize the will of discarnate souls. The result would be something like a quantum Ouija board, channeling spirits with a machine that amplifies quantum randomness into messages and movements. If we create a machine that responds to quantum events in macroscopic ways that are interesting enough, we might tempt a spirit or ghost to enter the machine, as has been reported in seances.
I have cited Kaufmann and Radin’s research indicating that neurotransmitters are in a quantum superposition, such that their response is not deterministic. I know of no one who has replicated Kaufmann’s work, let alone built upon it. The idea that brains are computers is firmly ensconced in the scientific and philosophical literature. Proving that brains behave in a fundamentally non-deterministic way would break that paradigm.
Lamarckian epigenetic inheritance has become well established just in the last twenty years. The experience of the parent is passed on to the child, such that the child is better able to cope with similar experiences in the future. James Shapiro has collected evidence that, in bacteria, there is also Lamarckian inheritance of genetic characteristics.
If McFadden’s theory is correct, we should expect that, quite generally, genetic mutations are not random, but respond to the experience of the parent — more speculatively, even to the future needs of the offspring. This would not be a difficult experiment. For example, the Mexican cave fish has a blind variant that dwells in caves and a sighted variant of the same species that lives in open water. A simple experimental design would be to start with blind fish and compare the frequency of sighted offspring among fish reared in darkness and fish reared in light.
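For concreteness, here is how such a comparison might be scored (the counts are entirely hypothetical, and the pooled two-proportion z-test is just one reasonable choice of statistic):

```python
import math

def two_proportion_z(k1, n1, k2, n2):
    """z statistic comparing the frequency of sighted offspring under two
    rearing conditions, via a pooled two-proportion z-test."""
    p1, p2 = k1 / n1, k2 / n2
    pooled = (k1 + k2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# Hypothetical counts: 14 sighted of 500 fry reared in light,
# versus 4 sighted of 500 fry reared in darkness.
z = two_proportion_z(14, 500, 4, 500)
print(f"z = {z:.2f}")  # |z| > 1.96 would be significant at the 5% level
```

Under McFadden’s hypothesis, light-reared blind fish would produce sighted offspring at an elevated rate; under the null, the two frequencies should agree within sampling error.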
The experiment that we would really like to do would distinguish whether conscious perception is necessary to collapse the wave function, or whether something like the irreversible amplification to the macroscopic level would have the same effect. Building the apparatus is easy enough; for example one might set up a Geiger counter with no one paying attention, so the clicks are like the proverbial tree falling in the forest with no one to hear it. Compare this to someone paying attention to the clicks and acting on the information that they provide. The problem with this experiment — the reason I believe it is impossible — is that whenever a measurement is made but the result is unknown, the consequence for the rest of the world is exactly as though the measurement had not been made. I know this is true of the canonical Bell pair of entangled particles, and I believe it is true generally. I cannot think of a way to experimentally distinguish whether it is consciousness or something more physical that collapses the wave function. If you know something about this subject and have an idea, please comment or write to me.
I believe the Hard Problem is hard because it is framed based on a false premise. Consciousness is not a product of a brain; consciousness has an existence more fundamental than physical matter. It was consciousness that brought the physical world into existence. I believe that, based on what we know, this is already the most economical hypothesis.
The most obvious weakness of this paradigm is that it is framed in terms of “consciousness” as a unitary phenomenon; yet we as individuals experience separate consciousness. If it is consciousness that collapses the wave function, can it be anyone’s consciousness? I can collapse the wave function for myself, but can I collapse it for you? If I make a measurement and tell you about it, does that count as “your” measurement? And if a disembodied spirit makes an observation, does the wave function collapse?
Bernardo Kastrup likens our separation to “dissociated alters” — a phenomenon which used to be called “multiple personality disorder”. According to Kastrup, we are all derived from Universal Consciousness via a process of forgetting akin to dissociated personalities. This is a topic that deserves its own essay.