

I believe neural networks are constantly battling for control, control of how we think and feel. They define the essence of who we are, our very personalities. Expert neurons monitor our environment and even our thoughts. A clown neuron can instantly turn us into a trembling, sweating, panicked chunk of protoplasm. Changing our neural networks is almost impossible.

How can an expert change this dreary picture?

. . . .

Steve Jobs was born in 1955, half a century after Heisenberg. Both men were geniuses. Heisenberg thought a lot about electrons. Jobs thought a lot about people. Heisenberg figured out where an electron might go. Jobs figured out what a person would buy – even before the person knew he wanted to buy anything.

If Steve Jobs had been an early twentieth century scientist, he would not have predicted the locations of electrons, he would have told them where to show up.

If you are living in the middle third of your life and money and finance are important to you, you might consider the advantages of knowing, before anyone else, what people will desire.

Jobs may help us with an age-old problem – people may know they need our product or service, but insist on searching for reasons not to buy. Based on their fears and biases, they can always find many reasons.

Almost all business coaches and sales advisers talk about various techniques to allay fears, make yourself likable, recognize and overcome objections, and get a commitment.

What these advisers do not understand is that everyone, including you, has a unique worldview. What you say to a prospect – in fact, what you say to anyone – should be tailored to that person's specific worldview. And not just that. What you say should also be tailored to your own worldview. You should say what you feel comfortable saying, what you think is honest and true. If you do this, you will sound more sincere and will likely be more successful.

The hidden assumption of these experts is that you have never considered this advice and you can easily change your personality and quickly start using their suggestions. On extremely rare occasions, they will offer something you haven't considered.


If you are a great salesman, you are probably already automatically doing what the experts are suggesting. If you are a good salesman, you usually follow most of the advice offered. By working hard, you may be able to marginally improve and, by continuing to work hard, avoid slipping back to your old level (just good).

Expert advice is mostly a litany of truisms – not necessarily true – addressed to the average salesman, detailing how to approach the average prospect. We are all unique - which is a good thing. There is no average salesman. There is no average prospect.

I cringe whenever I hear one often-repeated truism: "You only have one chance to make a good first impression." This advice puts a lot of pressure on people and may actually increase the likelihood of a bad impression.

Before the internet, you could wait a few years until you were forgotten and then try again for a good first impression. Or if you worked very hard and met a few thousand people, you could take the view that you have a few thousand chances to make a good first impression.

Today, with the internet, and a world population of several billion, you have millions of chances to make a good first impression.

If you are not a natural salesman, expert advice on salesmanship boils down to telling you to take up voodoo and mind reading.

Your reaction should not be to quit and give up. Your reaction should be to think outside the box. You should ask if you can change the rules.

How can you change the rules? Realize, first, the audience of prospects we have been discussing is limited. It may be one person you are talking to at a networking function. It may be twenty or thirty at a seminar that took time, effort, and money to put together. Then, realize the advent of the internet gives you the chance to reach millions when they are ready to buy. If ninety percent of these are bad prospects, it doesn't matter. Ten percent of millions is still hundreds of thousands of prime prospects.

. . . .

We have been using our newly invented microscope to study neurons and our minds to explore the much smaller quantum world. Now we can look up and observe our surroundings, or use our telescopes to look up and out at the vast cosmos. We seem to be perched precariously on a spot in the middle, a spot between the very small and the very large.

When we look up, we see our world. We see green grass, blue sky, the sun, the moon, stars, everything.

Our scientists like to measure our world. The history of science documents their struggles to accurately measure many things, perhaps the most important being the speed of light. They believe this speed does not change – even if it is measured on the other side of the universe.

Our scientists, now astronomers, with their telescopes and other sophisticated instruments, look outward at the vast cosmos. They see a gazillion galaxies, each much like ours, moving away from us and from each other. This is viewed as confirmation of the Big Bang Theory, in which everything we see “out there”, including space itself, started at a universal center that exploded. What we see today is the remnant of this initial explosion, with galaxies still moving out and away. You might say that our scientists today want to view the structure of the cosmos as fingerprints left shortly after the Big Bang. I would like to find fingerprints from even earlier, maybe even from Never Never Land, but first we have to acknowledge problems with current theory. These fingerprints may be smudged.

. . . .

Albert Einstein was upset about the quantum concept of entanglement or, as he called it, “spooky action at a distance.” This concept stated that two sub-atomic particles could be connected in such a way that changing one would instantaneously cause a change in the other, even if the two particles were separated by light-years.

If you really think about it, this entanglement or "spooky connection" might mean that we have a fundamental lack of knowledge and understanding of time and distance.

I remember a Simpsons episode that began, like most, with Homer and his family sitting on their couch. We had a close-up of Homer's head. We then moved up and away, seeing first Homer's house, then clouds, then the entire earth. This moving away process continued until we saw our solar system, our entire galaxy, and then all the galaxies in the universe. As we continued to expand our view, structures appeared that looked like atoms, then we saw molecules and DNA. Finally, we could make out living cells, and then we ended our journey back where we had started, looking closely at Homer's head.

The Simpsons writers were trying to give us a unique perspective on reality. They may have been closer to the truth than they realized. I wonder if they recognized what they were saying about the nature of size.

Not understanding what distance means is the same as saying we don't understand what size means. To illustrate: if we walk for one hour at 3 miles per hour, we will travel three miles. If we then turn right 90 degrees and walk for another hour at the same speed, turn right again and walk a third hour, and finally turn once more and walk back to our starting point, we will have walked for four hours and traveled 12 miles. If we traced our steps, we would have laid out a square, each side being 3 miles long. The area of this square would be nine square miles (3 miles X 3 miles = 9 square miles).

You do not need an advanced degree in mathematics to understand this.

Now, if we board a helicopter and slowly, at a rate of 3 miles per hour, rise vertically for one hour, we would arrive at a point three miles above our starting point. If, traveling at the same speed, the helicopter now moves horizontally and follows our earlier walking journey, it will, four hours later, arrive back at the point three miles above our starting point. Again, a square with an area of nine square miles would have been laid out, this one identical to the first, but separated by three miles. In fact, if we connect each point on this square where we changed directions to the corresponding point on the ground with three mile long lines, we will have constructed a cube. The volume of this cube would be twenty seven cubic miles (3 miles X 3 miles X 3 miles = 27 cubic miles). In this case, size and volume are the same thing. The size of the cube is 27 cubic miles.
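If you prefer to see the arithmetic spelled out, here is a small sketch using the numbers from the walk and the helicopter ride above:

```python
# Walking at 3 mph for 1 hour per side traces out a square.
speed_mph = 3
hours_per_side = 1
side = speed_mph * hours_per_side      # 3 miles per side

perimeter = 4 * side                   # 12 miles walked in 4 hours
area = side ** 2                       # 9 square miles for the square
volume = side ** 3                     # 27 cubic miles for the cube

print(perimeter, area, volume)         # 12 9 27
```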

What can you say about the size of this cube? To us, it seems pretty big. To one of the living cells we have been discussing, this cube appears to be almost infinitely large. On the other hand, compared to the known universe, this cube is infinitesimally small.

Can we say any more about this cube? Maybe Albert Einstein, who hated "spooky action at a distance", should have thought a little longer and harder about one of his thought experiments.

. . . .

One of the more popular theories of why our universe appears as it does today flows from the Big Bang. Initially, the universe was infinitesimally small and infinitely dense. All the matter in the universe was confined to an infinitesimally small space – making this matter not only infinitely dense, but also infinitely hot. I am not sure modern science knows what it means by these terms: infinitesimally small, infinitely dense, infinitely hot. Perhaps they just mean the values are too high to calculate, that no instrument they have or could imagine could make the needed measurements.

. . . .

When I was trying to understand relativity, I read about what is called the "Twin Paradox", an Einstein thought experiment involving one person traveling in a rocket at a high rate of speed (almost the speed of light). His twin, who remained on earth, would, according to Einstein, age faster than the space traveler. I think, after some effort, I understand what Einstein was saying. If I do understand correctly (quite honestly, it makes my head swim to think about it), I can modify this space journey to address our current subject, size.

If you would like, you can google "Twin Paradox" and try to figure out what is being said, but I would like to set up another space journey to make a comment on size. Imagine a rocket leaves earth traveling to a planet circling one of the three stars in the Alpha Centauri star system. This planet is about four light years away.

On this far away planet, we have an older observer who spent half his life traveling to this distant point. He watches carefully through a powerful telescope as our large rocket, piloted by a brave astronaut, blasts off from earth. A second observer, who will stay on earth, also watches the proceedings.

As we have said before, time seems to be important. The astronaut and the two observers are habitual clock watchers and each demands his own personal, extremely accurate clock. Before the astronaut or the first observer leave earth, all three clocks display identical times.

Some of our fellow earthlings have previously placed twelve three-mile-long, straight steel pipes in earth orbit. Using the cube we traced out above as a model, they have constructed a real, orbiting cube that has a volume (size) of twenty seven cubic miles. Our astronaut, using several cables, attaches the cube to the rocket, and then heads for Alpha Centauri.

Our rocket is capable of accelerating continuously so that the astronaut will feel as if he weighs the same as he would on earth (the astronaut would feel he was being pulled toward the earth from which he was rapidly retreating).

If you asked the earth observer, two years later, about the rocket, he would say it is moving away from him at almost the speed of light. The rocket, the astronaut, and the cube all display a "red shift", which means the rocket is rapidly receding (Edwin Hubble, the famous American astronomer, had observed a red shift in the light from far away galaxies, which told him these galaxies were moving away and thus the universe was expanding). The astronaut's clock is running slow. The four steel pipes of the cube that are pointed toward earth are compressed, with a length of perhaps 1.5 miles rather than the 3.0 miles you would expect. If you calculate the size of the cube, it has shrunk from the original 27 cubic miles to 3 X 3 X 1.5 = 13.5 cubic miles.
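The 1.5 mile figure can be reached with the standard length-contraction formula of special relativity. Here is a small sketch – the speed of about 0.866 times light speed is my own assumption, chosen so the numbers work out:

```python
import math

def contracted_length(rest_length, beta):
    """Length of an object moving at beta = v/c, as measured by a
    stationary observer (special relativity's length contraction)."""
    return rest_length * math.sqrt(1 - beta ** 2)

rest_length = 3.0               # miles: the pipe, measured at rest
beta = math.sqrt(0.75)          # about 0.866 of light speed (my assumption)

print(round(contracted_length(rest_length, beta), 6))   # 1.5 miles
```

At lower speeds the contraction is tiny, which is why we never notice it in everyday life.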

Now let us turn to the second, remote observer. Since I get confused when I start thinking about the "Twin Paradox", I will not ask what the second observer sees "at the same time". Instead, I will note that the earth observer is looking at the rocket when it is a quarter of the way to Alpha Centauri, and ask what the second observer sees when he and the observer on earth believe the rocket, cube, and astronaut have made one fourth of the journey.

The remote observer would see that the rocket, astronaut, and cube are all displaying a "blue shift", which means these objects are approaching rapidly. The astronaut's clock is running fast. The four steel pipes of the cube that are pointed toward Alpha Centauri are expanded, with a length of perhaps 4.5 miles rather than the 3.0 miles you would expect. If you calculate the size of the cube, it has grown from the original 27 cubic miles to 3 X 3 X 4.5 = 40.5 cubic miles.

Let us stop at this point and make a few observations. To the astronaut, time is passing normally, and the cube, the rocket, and he, himself, are all unchanged. To the observer on earth, the astronaut's clock is running slow, and everything observed is slightly smaller. To the remote observer, the astronaut's clock is running fast, and everything observed is slightly larger.

If the astronaut makes the cube rotate (the connections between the cables and the cube could be modified so that the cables wouldn't get tangled up), the size of the cube will change as its orientation to the remote observers changes. To the observer on earth, the cube would rotate slowly and its size would change slowly. To the other observer, it would rotate rapidly and there would be a rapid, periodic size change.

What happens if the astronaut, at this point, shuts down the rocket engine? The astronaut, rocket, and cube will suddenly be weightless. All three would stop accelerating, but their velocity does not change. Their speed, from the standpoint of the remote observers, is approaching the speed of light. The red shift or blue shift observed would not be affected. I may be wrong, but I suspect that any size changes observed by the remote observers would also not go away.

There is one thing that might be useful to realize. On the other hand, if you are like me, it may just contribute to a feeling that you are not sure you understand relativity. Neither remote observer will know anything about the astronaut (such as that he has shut down the rocket engine) until light reaches him. The astronaut is at a point where he has traveled one light year (the distance light travels in a year) and has three light years left to travel. The earth observer will not know that the rocket engine is off until a year has passed. The Alpha Centauri observer will have to wait three years. And don't forget, the two observers did not start counting at the same time - there is no same time.

If the astronaut does not shut down the engine, the rocket will be traveling at nearly the speed of light when it reaches the remote observer. In fact, we could get the rocket to come as close as we would like to light speed by assuming the rocket had had longer to accelerate. Instead of starting from earth, the rocket could have started its journey on the other side of the galaxy.

In any case, you could imagine the cube being the size of a galaxy as it passes the remote observer. It might be that only one atom from the astronaut's left thumb would pass close enough to be detected. Yet, as soon as the rocket passes, the whole configuration (cube, rocket, astronaut) becomes microscopic.

There is one major problem with the mental exercise I have presented. There may, in fact, be other problems. With relativity, as with other things, the devil is in the details. Nevertheless, I hope that much of what I say is valid and has value.

The problem I recognize is we have been thinking in three dimensions. As the rocket approaches the remote observer, two of the three dimensions of the cube remain a constant three miles. The third dimension has expanded, and with it, the volume or size of the cube. The cube has grown, but in only one direction. It has been transformed into a very long rectangular prism.

If Einstein had done this thought experiment, he would have realized that not only is time relative to the observer, but also size.

. . . .

Einstein's revolutionary idea was that different people may view time as passing at different rates – this was not an illusion; time really did pass at different rates. But, as we sit here in the middle between large and small, we need to remember that time is relative in other ways.

Early in recorded history, scientists formed groups. Among other things, these experts said the flat earth was the center of the universe. As time passed, these experts' views of reality, how things really were, would suddenly change. Some revolutionary would have successfully attacked the old view. The old guard always resisted, sometimes with violence, but were quickly replaced.

The new guard, now the experts, were always know-it-alls. The time they lived in, their now, was special. Their ideas, which explained the wrong views of the past, were supreme. Any other view, any other theory, was idiotic.

And so it is today.

Our science today is built on a long history of scientific thought and research. Except it is not. Long, that is.

Remember our imaginary journey across the country, where a metal pole in New York City represented year one when life began, 1500 million years ago, and a metal pole in Los Angeles represented today? We would only see dinosaurs when we reached eastern California. All of our recorded history would occur within a foot of the western pole. Our long history is no time at all.

. . . .

Planck's Length is the shortest distance that has any meaning in our physics. Planck's Time is the interval of time needed for light to travel a Planck's Length. Since no length can be shorter than a Planck's Length, no interval of time can be less than a Planck's Time.
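The relationship between the two Planck units can be spelled out directly. The values below are rounded, and this is only a sketch:

```python
# Planck time is the time light needs to cross one Planck length.
planck_length_m = 1.616e-35        # meters (rounded)
speed_of_light = 2.998e8           # meters per second (rounded)

planck_time_s = planck_length_m / speed_of_light
print(planck_time_s)               # about 5.4e-44 seconds
```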

Our Science believes time sprang into existence, along with our universe, in what is called the Big Bang, about 14 billion years ago. A few gazillionths of a second later, the space in our universe inflated from a volume measured in Planck units to something so large that we might have seen it with the naked eye. Our universe grew to the size of a grain of sand in a few gazillionths of a second – implying that its “surface” was traveling “outward” at many times light speed. If you believe there is nothing beyond our universe, the words surface and outward lose meaning.

I can't help but notice that although our time could have started when the universe was the size of a super-microscopic dot, it didn't have to. Could time, along with our universe, not have begun until everything reached grain-of-sand size? What would be the effect on the fancy Dan equations of quantum physicists? It would still be pretty hot inside a grain of sand that contained all the energy and matter we see around us, but not the heat envisioned at the very beginning by our science.

If time, matter, and energy did not show up until everything we know was the size of a grain of sand, we would no longer have to deal with infinities, like infinite temperature. Our math has problems dealing with infinities and these problems would go away.

How does this change our speculations about the birth of the universe? If we started as a grain of sand, we would still have a Big Bang – it would just be a slightly smaller Big Bang. Instead of seeing all the galaxies fleeing an infinitesimal, infinitely dense dot, they would be fleeing a grain of sand. Must have been one scary grain of sand.

Since everything is connected, could the birth of our universe have any similarities to births we see in our world?

In humans and many other animals, a germ line is established early in fetal development. All other cells are somatic – their chromosomes come in pairs, one chromosome of each pair from the mother and one from the father. Somatic cells are usually differentiated to perform particular tasks – a heart cell is not a liver cell.

Germ line cells can eventually pass on genetic material to offspring. Of course, this process is time dependent. Once the animal is sufficiently developed, these cells can become sperm or eggs through a process called meiosis. It is less important to know that each cell can yield four sperm (or, in females, an egg plus discarded polar bodies). It is more important to know that a sperm or egg contains half of the genetic material needed for a new animal. This genetic material (in humans) is in the form of 23 chromosomes – a single copy of each, from the male or the female. It is also important to note that each single copy is not passed on unchanged, but rebuilt from both of that parent's copies – the eventual baby may have traits from the grandparents. When and if a sperm and an egg combine, their chromosomes will pair up into a full double set, with half the genetic material from the male and half from the female. From this single combined cell, a new individual will develop and eventually a baby will be born.

The process from germ line to baby is complex and long, taking a generation from the establishment of a germ line to a new adult. Paired chromosomes have to be separated and packaged into sperm or eggs; then, at conception, the single sets have to be recombined into a full paired set. That first combined cell then divides many times until we have an adult with a few trillion cells.

Our science likes to describe this whole process in exquisite detail, but they have no idea about what controls the details of these procedures. If they address this at all, they say cells act automatically, that each of millions of steps is, to use a computer term, programmed. A religious scientist might say God did the programming.

You can build a different scenario if you consider each cell a small quantum computer, self aware and wanting to carry out its important task. Even if this wild speculation is accurate, the cell would not have enough information. It would have to be part of a neural network where “spooky action at a distance” is possible. It would have to have tentacles into Never Never Land, where the potential for everything, including instructions for life, has always resided and forever will.

Maybe you find this speculation absurd, or interesting, or both. We should remember to look for support bleeding through to our world. We should also remember that we are looking for fresh ideas about Never Never Land.

Suppose you were minuscule, standing at the base of one of your 23 chromosomes, in one of your trillion or so somatic cells. This chromosome is made up of two DNA strands – each strand is called a polynucleotide, being composed of many smaller units called nucleotides.

When I say you are minuscule, I mean you are small enough to see molecules and even atoms. When you look at this chromosome, you see double strands form what looks like a ladder rising and twisting into the sky. In the distance you see several more chromosomes rising toward a distant moon.

The left and right sides of this ladder are each made of a sugar called deoxyribose and phosphate groups (each a phosphorus atom attached to four oxygen atoms). Each rung of the ladder is made of two nitrogen-containing molecules, either adenine paired with thymine, or cytosine paired with guanine. We can refer to adenine as “A”, thymine as “T”, cytosine as “C”, and guanine as “G”.

The two sides of the ladder are complementary strands – whatever base appears on one side of a rung, its partner appears on the other (A pairs with T, C pairs with G). Remembering our “ATCG” code, we could, for example, describe the first ten rungs we see, reading one side, as AACATTGGAT.
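Because of the pairing rule (A with T, C with G), one side of the ladder completely determines the other. A small sketch, using the ten rungs above:

```python
# Each base pairs with exactly one partner: A-T and C-G.
PAIR = {"A": "T", "T": "A", "C": "G", "G": "C"}

def complement(strand):
    """Return the strand that would pair with the given one."""
    return "".join(PAIR[base] for base in strand)

print(complement("AACATTGGAT"))    # TTGTAACCTA
```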

We are looking at the chromosome as if we were the height of three rungs – AAC. This chromosome twists and rises into the sky. This chromosome, this ladder, may have hundreds of millions of rungs. And there are twenty-two other similar chromosomes in this cell – in fact, in the nucleus of this cell.

Our biologists are very proud that they have finally figured out that chromosomes are giant code books. A group of rungs can define a gene. The cell uses genes to build proteins, which are needed by the body – or, more accurately, which are used to build the body. Although you may be amazed that the cell knows how to manufacture and transport proteins, I think it argues for the cell having the power of a quantum computer. It needs this power to decide what protein is needed and when, as well as which gene to access. Each chromosome contains thousands of genes – one of which, to use an often quoted example, might mean you have blue, rather than brown, eyes. Could you look at this ladder and see how it might become you?

Genetics is a complex, amazing subject. Maybe something similar, but more complex, goes on when there is no time. Can we imagine a genetic code in Never Never Land?

How does a code define time, or mass, or energy, or something else we are not aware of? What would a different time be like – what would it even mean? In our universe, time is related to mass and energy. Is a relationship always required?

We like to say that mass is equivalent to energy – after all, we can convert a small amount of mass to energy and get an atomic bomb. Would it help to think of time as another form of energy? Maybe converting one second to energy would blow up half the universe – this gives a whole new meaning to the term “time bomb”.
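The mass-energy arithmetic behind that claim is easy to sketch; the one-gram figure is my own example:

```python
# E = m * c**2: the energy locked in one gram of matter.
mass_kg = 0.001                    # one gram
c = 2.998e8                        # speed of light, meters per second

energy_joules = mass_kg * c ** 2
print(energy_joules)               # roughly 9e13 joules
```

Nine times ten to the thirteenth joules is on the order of the yield of an early atomic bomb, which is the point of the comparison.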

When we speak of genetics, leaving out many details, we can say that a cell decides it needs a protein, searches a chromosome for the needed code, then builds the protein. This takes time – how could you do something similar in Never Never Land where there is no time?

What if certain genes in Never Never Land code for time particles? The genes that are related to our universe give us our time. Other genes may give other universes their times. This is similar to what happens in human cells – all genes are there, but one group may be active in a liver cell, while another group is active in a heart cell.

Time in other universes may be different from the time in our universe in ways we can't imagine, but all times could have properties in common – all based on timeless genes. A common property that I can imagine, and the one I would like to explore more, is what I call the Magic Now. It holds the universe and us tightly in its grip – we cannot go back ten seconds or ahead ten years. We are here, Now.

. . . .

I have a theory. I have a theory that I hope can be supported by observations of nature, the nature we see around us. Speaking of observations, this theory is based on two observations. First, Man discovered quantum physics about ninety years ago. We can say that Man has had about a century to figure out quantum physics, to know enough to be able to build practical devices like computers. Second, Life has existed on Earth for 1500 million years. This is 15 million times as long as Man has known about quantum physics.

I have a theory. I have a theory that Life discovered quantum physics long before Man existed. Life discovered that quantum physics could help its creatures survive, that quantum physics made thinking and being self aware possible, and creatures with this trait had a better chance of survival than whatever creatures had come before.

. . . .

Now is a good time to discuss what we know about quantum computing and how close we are to creating a useful, working quantum computer.

The computers we have today (actually, any device containing a computer chip) are almost exclusively dependent on silicon. This silicon has been grown into a crystal structure – groups of atoms arranged in a particular way and, in silicon destined for a computer chip, periodically repeated in three dimensions on a lattice. Such a crystal can be made up of billions of groups. The crystal is sliced into discs called wafers. Intel's web site explains how this process continues: Chips are built simultaneously in a grid formation on the wafer surface - A chip is a complex device that forms the brains of every computing device. While chips look flat, they are three-dimensional structures and may include as many as 30 layers of complex circuitry.

Each of the billions of groups (we can call them "bits") can have a value of "1" (on) or "0" (off) and the values of all the bits can be controlled by the complex circuitry. Modern computers can process (use in calculation or move from one location to another) billions of bits almost instantly. The key word is "almost". When we start to need to process huge numbers of bits, like maybe a million billion bits, we start to need a quantum computer.

With modern computers, we talk about bits. With quantum computers, we talk about qubits. A qubit can have a value of "1" (on), "0" (off), or a third value which may be viewed as a mixture of the first two. I will call this value "maybe".

I think and hope I can explain quantum computers without fully understanding what a qubit is. If I had access to a friendly quantum computer expert, he might help me gain a better understanding. Lacking this access, let me describe why I am confused.

My confusion is rooted in, well, quite frankly, it is hard to imagine anything more confusing than quantum physics. For example, there is a quantum concept called superposition.

The University of Waterloo is a highly ranked Canadian University. From Its Website: Superposition is essentially the ability of a quantum system to be in multiple states at the same time — that is, something can be “here” and “there,” or “up” and “down” at the same time.
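One common way to picture this: a qubit's state is a pair of weights (amplitudes), one for "0" and one for "1", and squaring a weight gives the probability of seeing that answer when you look. A minimal sketch of the equal-mixture "maybe" state:

```python
import math

# A qubit in equal superposition: weight 1/sqrt(2) on "0" and on "1".
amp_0 = 1 / math.sqrt(2)
amp_1 = 1 / math.sqrt(2)

prob_0 = amp_0 ** 2                # chance of reading "0" if measured
prob_1 = amp_1 ** 2                # chance of reading "1" if measured

print(prob_0, prob_1)              # about 0.5 each: the "maybe" state
assert abs(prob_0 + prob_1 - 1) < 1e-9   # probabilities must sum to 1
```

Measuring the qubit forces it to one definite answer, which is the decoherence problem discussed below.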

One of the things that started quantum physics in the first place was that no one could determine exactly where an individual electron would be. If you fired an electron at a metal pole, the electron might hit the pole, miss the pole, or glance off the pole at different angles. One could only give probabilities of where an individual electron would go.

Then there was the famous (to quantum physicists and lay people with an interest in science) Schrodinger's cat experiment. As you may remember, Schrodinger's cat was an imaginary cat that Schrodinger, a renowned physicist, put into a box. Then a random, unpredictable event either killed the cat or left it alive. The only way to know whether the cat was alive or dead was to open the box. Quantum physics says that, before the box is opened, the cat is in some kind of limbo, maybe both alive and dead, or half alive and half dead, maybe in the "maybe" state I mentioned above.

I also need to introduce one more quantum concept that applies to quantum computers: decoherence. A qubit, unlike a bit, is a quantum object. The complex electronic circuitry that helps define modern computer chips can easily determine the value of a bit, whether it has a value of "1" (on) or "0" (off). A qubit, on the other hand, is very shy. If one even looks at a qubit (for example, with electronic circuitry), it will lose its quantumness. When a qubit loses or falls out of its quantum state, quantum scientists say it has decohered. Quantum scientists do not want qubits to undergo decoherence. They seek to make practical use of qubits without looking at them.

Qubits are confusing. I think you can say that with just one qubit, you can do calculations – very fast calculations. If this qubit decoheres, however, it will have a definite value, either "0" or "1". No quantum calculations could be done – every answer would simply be "0" or "1".
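To make that concrete, here is a toy classical simulation of my own (real qubits, of course, are not little Python tuples): a qubit as a pair of amplitudes, where measurement – the "looking" that causes decoherence – forces a definite 0 or 1.

```python
import random

# A qubit modeled as two amplitudes (a, b) with |a|^2 + |b|^2 = 1.
# Measuring ("looking at") it collapses the superposition to a
# definite 0 or 1 -- loosely analogous to decoherence.
def measure(qubit):
    a, b = qubit
    p_zero = abs(a) ** 2          # probability of reading "0"
    return 0 if random.random() < p_zero else 1

# An equal superposition: "0" and "1" each with probability 1/2.
equal = (2 ** -0.5, 2 ** -0.5)

counts = [0, 0]
for _ in range(10_000):
    counts[measure(equal)] += 1
print(counts)   # roughly [5000, 5000]
```

Before measurement, both amplitudes are "there"; after measurement, only a plain 0 or 1 remains – which is exactly why a decohered qubit is no more useful than an ordinary bit.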

If you had two qubits, you could also do very fast calculations. How fast? Since we are in the quantum world, I am not sure our math works well there. I am not comfortable saying that a quantum computer based on two qubits is twice as fast as a quantum computer based on one qubit.

If our two qubits decohere, each has a specific value – "0" or "1". In this case, our two qubits would become equivalent to two bits in a normal silicon-based computer. The speed of calculations would be equal to or less than that of our normal computers.
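One way to see why qubit counts matter so much – a back-of-the-envelope sketch of my own, not anything from the quantum literature – is that n qubits in superposition take 2 to the power n numbers to describe, while n decohered qubits are just n ordinary bits holding a single value.

```python
# n qubits in superposition are described by 2**n amplitudes,
# while n decohered (classical) bits hold just one n-bit value.
for n in (1, 2, 12, 512):
    print(f"{n} qubits -> 2^{n} = {2 ** n} amplitudes")
```

The growth is explosive: 2 qubits need 4 amplitudes, 12 qubits need 4,096, and 512 qubits would need a number of amplitudes with over 150 digits – far more than there are atoms in the universe.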

We also need to discuss one more quantum concept. This was a quantum idea that really made Albert Einstein mad. Einstein's theories, and thus his reputation, were built on the idea that the speed of light was constant and nothing could go faster than the speed of light. Einstein believed that any scientist who suggested that something could go faster than light speed was not a scientist; he was an idiot.

So Albert Einstein was mad when a quantum idiot introduced the concept of entanglement. From the University of Waterloo's website: "Entanglement is an extremely strong correlation that exists between quantum particles — so strong, in fact, that two or more quantum particles can be inextricably linked in perfect unison, even if separated by great distances. The particles remain perfectly correlated even if they are light years apart. The particles are so intrinsically connected, they can be said to 'dance' in instantaneous, perfect unison, even when placed at opposite ends of the universe. This seemingly impossible connection inspired Einstein to describe entanglement as 'spooky action at a distance.'"

My next task is to tie these quantum concepts and my thoughts to current work on achieving functional and useful quantum computers. Almost everyone agrees that it would be good to have faster computers. It is not as easy as one would think to describe where research in this area stands, especially without getting into mind-numbing math related to exotic systems (what kind of systems? – varied and complex).

My first step was to read an article published by Wired magazine entitled "The Revolutionary Quantum Computer That May Not Be Quantum at All". This article ties a small Canadian company, D-Wave Systems, Inc., to Google, IBM, and ConAgra Foods, three very large companies. D-Wave claims to have developed a computer chip containing 512 qubits. This chip is the heart of D-Wave's quantum computer. As the title of Wired's article implies, there is controversy over whether or not this is really a quantum computer, but Google was impressed enough to buy one, probably paying D-Wave about ten million dollars.

To test the power of the D-Wave computer, and perhaps determine whether it was a quantum computer or just a regular computer, it was given a problem that normal computers regularly solve. IBM has a program called CPLEX. ConAgra uses this program to crunch global market and weather data to find the optimum price at which to sell flour. CPLEX running on a normal Intel-chip-based computer took 30 minutes to find the answer. The D-Wave machine found the answer in less than a second. It was 3,600 times as fast.
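The arithmetic behind that speed-up figure is simple enough to check, assuming "less than a second" means roughly half a second:

```python
# 30 minutes on CPLEX versus (an assumed) half a second on the D-Wave.
cplex_seconds = 30 * 60               # 1800 seconds
dwave_seconds = 0.5                   # "less than a second"
print(cplex_seconds / dwave_seconds)  # 3600.0
```

So the oft-quoted 3,600x depends on pinning "less than a second" at exactly half a second – a reminder of how soft these headline numbers can be.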

The D-Wave computer is a special purpose computer. It has to be programmed to accomplish a limited number of tasks. A cell phone is a general purpose computer – it can take pictures, send emails, text, and play games. We need, but are a long way from having, a general purpose quantum computer.

Reliable quantum computers would revolutionize research in many fields and lead to numerous technological advances. I can mention some, without mathematical references or even long-winded definitions. Quantum computers could break the security keys that keep our financial transactions safe as we buy and sell on the internet. To make up for this, quantum computers could generate new security keys that could never be broken. Quantum computers could process possible reactions between molecules and atoms, forecasting new and better superconducting materials that operate at room temperature – short-winded definition: superconducting materials can conduct electricity over long distances with no resistance and thus no loss of power. Calculations performed on a quantum computer can theoretically take seconds to complete, while the same calculations performed on a silicon-based processor could take years.

If we step back and observe the world we live in, it is reasonable to suspect that only neural networks made up of neurons, each with the processing power of a quantum computer, can explain how so much information can be processed quickly enough to control the vital processes of life (this is true whether the neural network – mind – belongs to a man or a bumblebee).

To understand our current status regarding quantum computing, I want to review in more detail what quantum scientists, in general, are doing, and, in particular, the progress and setbacks of the D-Wave scientists since the company began early this century.

The current, primary goal of quantum scientists is to develop reliable qubits. To quote again from the IQC website: "... we need qubits that behave the way we want them to. These qubits could be made of photons, atoms, electrons, molecules or perhaps something else. Scientists at IQC are researching a large array of them as potential bases for quantum computers. But qubits are notoriously tricky to manipulate, since any disturbance causes them to fall out of their quantum state (or 'decohere'). Decoherence is the Achilles heel of quantum computing, but it is not insurmountable. The field of quantum error correction examines how to stave off decoherence and combat other errors. Every day, researchers at IQC and around the world are discovering new ways to make qubits cooperate."

The IQC website also reports that a joint effort between their researchers and scientists from MIT has produced an experiment where twelve qubits are used together. This is a world record – at least, according to IQC. I will call this the "establishment view". D-Wave Systems claims that the computers they are selling each have a chip that contains 512 qubits. Their newest chip has more. Obviously, D-Wave scientists are not part of the establishment.

The D-Wave computer is a ten-foot-high black box designed to keep the tiny quantum chip within it very cold. In fact, the temperature is close to absolute zero. The reason cold is needed is that almost anything (heat, vibration, electromagnetic noise, a stray atom) can cause a quantum system to undergo decoherence (a bad thing). The heart of the chip is a group of 512 niobium loops which, when chilled sufficiently, exhibit quantum-mechanical behavior, becoming, in effect, qubits.

Niobium is a soft, gray, ductile metal. The niobium loops are small by our standards – less than a millimeter across and barely visible. They are huge, however, relative to computer components. D-Wave selected them for its computer, in part, because they could easily be mass-produced by a regular microchip fabrication laboratory.

When a niobium loop is cooled to almost absolute zero, two magnetic fields form that run around the loop in opposite directions at the same time. In physics, electricity and magnetism are two aspects of the same force – the fields can be interpreted as electrons in a superposition state (a quantum concept). If the loop exhibits other quantum properties, it can serve as a qubit. The qubits on the chip can, D-Wave hoped, be connected by quantum tunneling (more on this later) and entanglement. The wires needed to connect the components on the chip and the optical fiber cable that transmitted information to the outside world were engineered to stave off decoherence.

When the D-Wave computer was being designed, most establishment scientists were pursuing a "gate model" quantum architecture, where qubits placed on a chip form standard logic gates (ANDs, ORs, NOTs) like those in regular computer circuits – the building blocks of how a computer thinks. The D-Wave scientists decided to pursue a different architecture – one they felt would lead to a more robust chip, that is, one less subject to decoherence. It would still allow their computer to solve optimization problems, which are very important, but they would not have a general purpose quantum computer.

The question was: did they have a quantum computer at all? And so the fun began.

In 2010, D-Wave landed its first customer, Lockheed Martin. In 2013, Google and NASA were potential new customers of D-Wave. NASA wanted to determine the best route for its Mars Rover to follow as it explored Mars - a classic, difficult optimization problem that could be handled by D-Wave's computer. Before they would buy, however, Google and NASA demanded benchmark tests.

A benchmark test, in this case, is basically running a computer program on one computer – a standard desktop silicon-based computer – and then running the same program on another computer – the D-Wave quantum computer. If the D-Wave quantum computer ran the program and finished much more quickly than the standard computer, it was most likely really a quantum computer. If, on the other hand, for whatever reason, its niobium qubits had suffered decoherence, D-Wave's computer would have become a very expensive, standard desktop computer. Its performance in a benchmark test would not be substantially different from the performance of the silicon-based computer.
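In ordinary software terms, a benchmark like this is nothing exotic. Here is a minimal sketch; the solver functions named in the usage comment are hypothetical stand-ins of mine, not real CPLEX or D-Wave programming interfaces.

```python
import time

# Time one solver on one problem: the benchmark itself is just a
# stopwatch wrapped around a function call.
def benchmark(solver, problem):
    start = time.perf_counter()
    answer = solver(problem)
    elapsed = time.perf_counter() - start
    return answer, elapsed

# Hypothetical usage (solve_classical / solve_quantum are stand-ins):
#   answer_a, t_a = benchmark(solve_classical, problem)
#   answer_b, t_b = benchmark(solve_quantum, problem)
#   print("speedup:", t_a / t_b)
```

The subtlety, as the D-Wave controversy shows, is never in the stopwatch – it is in choosing problems, hardware, and competing software that make the comparison fair.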

Three benchmark tests were performed. One, mentioned above, used the IBM CPLEX program. D-Wave passed with flying colors. For the other two benchmark tests, the results, unfortunately, were less clear cut - but Google and NASA bought D-Wave's machine.

Establishment quantum scientists, with their 12-qubit experiment, had been skeptical of the D-Wave 512-qubit claim. With the Lockheed Martin purchase, the establishment could, for the first time, run unbiased benchmark tests. Most of these tests showed that standard, non-quantum, special purpose (not general purpose) computers performed just as well as the D-Wave. It is worth noting that some of these results were available when, in 2013, the Google-NASA purchase was made.

D-Wave responded that the benchmark tests were really not unbiased; they had reasons why the tests were unfair. In addition, the establishment scientists had been very thorough, running many tests. On some of these, the D-Wave computer had performed well. D-Wave scientists wanted to know why – did these particular tests involve less decoherence? Could these results help them make their quantum computer better?

D-Wave still has the support of Google and NASA. They now have a D-Wave 2X computer which sports 1,097 qubits. From a Tech Times article published in December 2015: "Google Director of Engineering Hartmut Neven said that what the D-Wave 2X can process in a span of a second is something that a single-core classical computer can solve in a span of 10,000 years." The establishment immediately, enthusiastically, and vehemently attacked the claim.

This war about the number, kind, and uses of qubits is very informative. When you look at the big picture, however, you may feel, as I do, that we have only scratched the surface. We need to know a lot more before we can hope for major advances in the field of quantum computing.

For 1,500 million years, Life has been moving molecules, atoms, and maybe even sub-atomic particles around, with the, perhaps blind, goal of helping each individual creature survive and reproduce.

Quantum physics has dominated the physical sciences for almost a century. Vast amounts of research dollars have been spent. Somebody, somewhere, must think quantum physics is important, valid, and worth the research dollars that have been allocated.

A number of years ago, I read about a "Scanning Tunneling Microscope" (I am sure it cost more than the fifty dollar microscope one could get at a toy store). This microscope was owned by IBM. The word "Tunneling" in the name refers to a quantum concept. From Wikipedia: "Quantum tunneling refers to the quantum mechanical phenomenon where a particle tunnels through a barrier that it classically could not surmount."

The significant part of the microscope was a very sharp, electrically charged needle. In fact, it was so sharp that the point of this needle was a single atom. The rest of the microscope was designed to let an IBM researcher move this point (atom) over the surface of a sample. Electrons would flow from the point (atom) to an atom on the surface of the sample. By moving the needle back and forth while monitoring the electron flow, the IBM quantum scientist could show the atoms that made up the surface as they rose and fell, forming peaks and valleys. This was a map of the surface, and the power of the microscope was high enough to see single atoms.
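For the curious, the textbook rectangular-barrier formula gives a feel for why that electron flow is such a sensitive probe: the tunneling probability falls off exponentially with the width of the gap. The numbers below are illustrative guesses of mine, not values from the IBM work.

```python
import math

# Rectangular-barrier tunneling: T ~ exp(-2 * kappa * L), with
# kappa = sqrt(2 * m * (V - E)) / hbar.  Illustrative values only.
HBAR = 1.054571817e-34   # J*s
M_E = 9.1093837e-31      # electron mass, kg
EV = 1.602176634e-19     # joules per electron-volt

def tunneling_probability(barrier_eV, energy_eV, width_m):
    kappa = math.sqrt(2 * M_E * (barrier_eV - energy_eV) * EV) / HBAR
    return math.exp(-2 * kappa * width_m)

# A 4 eV barrier, a 1 eV electron, a half-nanometer gap (STM scale):
print(tunneling_probability(4.0, 1.0, 0.5e-9))
# Double the gap and the probability collapses:
print(tunneling_probability(4.0, 1.0, 1.0e-9))
```

This exponential dependence on distance is exactly what lets the needle register the rise and fall of individual atoms as it sweeps across the surface.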

One more thing about this particular Scanning Tunneling Microscope: the IBM quantum scientists figured out how to move some of these surface atoms. Just to impress us, they moved a couple of dozen atoms around and spelled out the word "IBM".

When we look at particle physics, we see an IBM researcher move twenty atoms to spell "IBM", while within his body, Life moves billions of atoms so that the researcher can stand there, look at his work, and think about how smart he is.

By the way, the term "particle physics" is very similar to "quantum physics", and we could usually get away with using either in most cases. In general, a particle physicist usually works with a large accelerator, viewing sub-atomic particle reactions, while a quantum physicist might work in a small laboratory studying atoms, probably using lasers. Both have to keep quantum effects in mind.

When we look at quantum computing, we see scientists battling each other over how many qubits they have created, while having no real understanding of even what a qubit is. They seem a long way from understanding their mysteries and being able to build practical devices that work in the "real" world. Maybe if quantum computer experts would turn the limited, working tools they have on living cells, they could learn enough to make real progress.

Everything is connected. When we view the microscopic world, the connections are obvious. We always see the same germ when large groups of our fellow humans lie suffering with the same symptoms. With more powerful instruments, we can see a polio virus – obviously intimately connected to my life.

We can also look upward. Even without a telescope, we can see that the sun is connected to us in many ways. We can look further away, but for now we need to stay in our own neighborhood, our own solar system.

. . . .

I want a number of experts who are not like me to read my writing. The first step may be to get someone who is like me to read my writing.

previous ---- next