WHY WOOD BEES CAN'T SELL SQUAT - THINKING LONGER AND HARDER

(Please Forward to anyone interested in a different view of Sales) - Copyright © 2016 by Mike Stewart

I was sitting on my son's patio one warm spring afternoon. The floor of the patio is concrete. The sides are open to the outdoors. Wooden columns support wooden beams that make up the ceiling. The ceiling is designed to protect you from the sun, but there is space between the beams open to the sky. All the wood, whether column or beam, is unpainted and weathered grey. This wood was important to our local neighborhood wood bees.

Wood bees, also known as carpenter bees, are common in our area. Many people consider them pests to be poisoned or otherwise killed, for a couple of reasons.

First, the wood bee looks like a bumble bee, which can sting. The wood bee, especially the male wood bee, is very aggressive. Although he cannot sting, no human likes to have an angry bee buzzing around his head.

Second, wood bees damage wood, especially unpainted patio wood that is weathered grey. When my son, Michael, bought his house, and with it, his patio, he and wood bees became mortal enemies.

The female wood bee finds and expands holes in wood. Or she chews new tunnels. She then collects pollen, in the usual bee way, puts it in the tunnel, and lays her eggs. The eggs hatch, the young bees (larvae) eat the pollen, eventually becoming new, adult wood bees. Over time, enough tunnels can weaken beams of patio wood to the point of collapse.

That warm spring afternoon twenty or thirty wood bees were patrolling the patio. They were behaving exactly like an entomologist (hereafter, bug scientist) would predict. I knew some of the ways my son had tried to discourage the bees - spraying wood with poison, painting over the tunnels.

But suddenly, I knew Michael was going to try a new tactic. He came onto the patio carrying a tennis racket. He promptly served one of the bees well into the backyard. Humans 15, Bees Love.

I was impressed, but I don't think the surviving bees felt the same emotion. Also, I don't think any Bug Scientist has ever predicted how they would react. The Wood Bees seemed to quickly come to the conclusion that "discretion is the better part of valor".

After the tragic death of one of their members, the wood bees kept close watch on Michael. Whenever he approached, they would fly close to the wooden columns or up, between the wooden beams. He couldn't get a clear swing. I believe, at the end of the day, the score was still Humans 15, Bees Love.

A Bug Scientist might want to investigate the persistence of this phenomenon. He might want to know if the tennis racket would be remembered tomorrow, or next month, or next year. Yes, I know, experiments have shown this kind of experience is not inherited across generations. But quantum physics indicates to me that it never hurts to ask.

I, however, want to pursue a different question: What goes on in the mind of a wood bee?

Before we can explore the mind of a wood bee, we must first admit to the possibility that a wood bee has a mind. You may find it possible to believe I have a unique worldview and you have a different, unique worldview. It is, however, probably more difficult to believe both worldviews are equally valid. Or invalid. But it requires a quantum leap to accept that a wood bee has a worldview, and this worldview is just as valid, or invalid, as yours or mine. If quantum physics hadn't encouraged me to see the philosophical nature of science, I don't think I would have seen the evidence that is everywhere around us.

The bee brain is made up of about one million neurons. The human brain is made up of about 100 billion neurons (about 100,000 times as many as are in the bee brain) and many more neuroglia (or glial cells) which serve to support and protect the neurons.

When we have a group of neurons as in the human brain, the subject of neural networks arises. We need to consider the many different combinations of "excited" neurons that are possible, whether these neurons are in bee or man. I would like to have this discussion, in part, because of the current belief that our memories, thoughts, and even our self-awareness are products of neural networks. I am going to use the term "neural networks", but what I say will also apply if it is later discovered that glial cells play a greater part than we now think. I don't know if bee brains contain something like glial cells (bug scientists may know), but it doesn't really matter. In fact, what I say would apply if it is discovered (by whatever scientist studies really, really small things) that large groups of similar molecules within a cell are behaving in a neural-network-like manner.

Such a discovery might support a theory that individual cells could be self aware.

Let me clarify the last two sentences. As I have previously said, cells like neurons are really HUGE. I gave the example, which I read somewhere, that if the diameter of a human hair was expanded to the height of the Empire State Building, a DNA molecule (which is made up of a large number of atoms and is usually found within the nucleus of a cell) would be the size of the toenail of a small dog sitting in the lobby. The "similar molecules within a cell" that I am talking about above are much smaller, made up of a few dozen or a few hundred atoms. The scientist mentioned would have to go into the lobby, pet the small dog on the head, sit down at a table and peer through his microscope to see them.

When I talk about neural networks, I want to talk about very large numbers with minimum use of numbers. This isn't as strange as it seems. I don't think any of us can really conceive of what 100 billion neurons means except that it is lots and lots and lots and lots of neurons. On the other hand, the one million neurons that make up the bee brain is just lots and lots of neurons.

Most experts on the structure of human neurons are not experts on neural networks. This is slowly changing.

Technology has now advanced to the point where a scientist can tell when an individual neuron is "excited". I want to relate an experiment I read about, but first I want to define "a scientist" and "excited".

In the case of this experiment, "a scientist" means a group of scientists standing around a patient or volunteer (for convenience, I'll call him the guinea pig). These scientists have access to some expensive equipment and have either attached electrodes to the scalp of the guinea pig or inserted probes into his brain. I don't know if these scientists are experts in a soft science like psychology or experts in biology. I don't know how much they know about human neurons or neural networks. I am pretty sure none are experts on bee neurons.

A neuron is "excited" when it receives input from another neuron and in turn transmits a signal. This signal is electrochemical in nature and is very weak. Electrochemical means the signal takes place in the neuron (actually in a part of the neuron called the axon) and is based on chemical reactions that result in a very weak electrical current. Our equipment is now capable of recording the strength of this signal.

Now to the experiment: The guinea pig is shown a picture of President John Kennedy. What happens in the brain of the guinea pig at the neuron level?

I want to ask some questions that the scientists probably did not ask, or, if they did ask, were probably not able to answer.

When the guinea pig sees the picture of President Kennedy, his optic nerve transmits information from his retina to his brain. Many neurons could receive this information, but only one becomes excited. Only this neuron, out of thousands, seems to be the Kennedy Face neuron. The Kennedy Face neuron, which is near the back of the brain, in turn sends a signal throughout the brain. Although an unknown, perhaps vast, number of neurons are exposed to this signal, only a few, as the Kennedy Face neuron did, become excited. These secondary neurons also send signals, resulting in a few more excited neurons. The guinea pig reports that the picture of Kennedy has caused him to think about Kennedy, when he was born, his inauguration speech, his family, his World War II record, his assassination, and other facts.

The neurons that were excited make up one Kennedy neural network.

This little experiment brings up about a million questions. I won't try to address all of them, but let's wade into a few.

When the optic nerve transmits information from the retina to the brain, how is the information coded? This question comes from my computer and data processing background, one area where I could be called a "real expert", rather than just a well informed layman.

If you need to send information from one computer to another, you often have to compress the information first. A small amount of compressed information can be sent more quickly than a large amount of uncompressed information. In this Internet Age, one computer may be in the United States, the other computer may be in China. The information will probably go on a long trip where it will pass through at least two communication satellites. To save time, the first computer will compress the information before sending it, and the second computer will receive the compressed information and then uncompress it.
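
The compress-send-uncompress round trip can be sketched with Python's standard zlib module. Real network protocols differ in detail, but the principle is the same: repetitive information shrinks a lot, travels quickly, and comes back out intact.

```python
import zlib

# A "large amount" of information: a highly repetitive message,
# much as the raw pixels of an image tend to be.
message = b"the quick brown fox jumps over the lazy dog " * 1000

compressed = zlib.compress(message)     # first computer compresses
restored = zlib.decompress(compressed)  # second computer uncompresses

print(len(message))     # bytes before compression
print(len(compressed))  # bytes actually sent: far fewer
assert restored == message  # nothing was lost in transit
```

The more repetitive the information, the better this works; truly random data hardly compresses at all.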

When I ask "how is the information coded?", part of what I am asking is "Is the information compressed?".

Imagine that it takes 3 seconds for the uncompressed image of a charging cat to reach the brain, while it only takes a tenth of a second for the compressed image to reach its destination. The question of compressed or not would be very important to a bird.

Some other questions that might be asked are "Why is only the Kennedy Face neuron initially excited?", "How many other neurons are exposed to the optic nerve information and are not excited?", and "Do glial cells have anything to do with this process?".

I don't think we can answer these questions, but thinking about them could lead to valuable speculation, some of which may turn out to be true. Also, as I have said before, many times a non-expert with limited knowledge can ask a question he has no hope of answering, while the true expert, bound by a fixed way of thinking, and obsessed with the minutia of his profession, would never think to ask the question. Once asked, however, the expert would have the knowledge to find answers.

To give a concrete example that applies to this situation: I know what a synapse is. The Wikipedia definition is: In the nervous system, a synapse is a structure that permits a neuron (or nerve cell) to pass an electrical or chemical signal to another neuron. An expert would have much more knowledge about how a synapse operates than I would. It seems possible, or even likely, that the synapse between the optic nerve and the Kennedy Face neuron is acting differently from a synapse between the optic nerve and a neuron that is not being "excited". Maybe the expert can design an experiment that will give us detailed information about processes at the synapse level and we can compare the Kennedy Face neuron synapse and the unexcited neuron synapse.

Let's go back (we really haven't left) to our latest three questions: "Why is only the Kennedy Face neuron initially excited?", "How many other neurons are exposed to the optic nerve information and are not excited?", and "Do glial cells have anything to do with this process?".

It seems to me that the information being transferred must be more than just compressed. It must contain some key that the Kennedy Face neuron could recognize. Unfortunately, if we think about creating this key, we have to ask "How in the world could the optic nerve know it needs to send the information to the Kennedy Face neuron?". Once it knows that, it can create a key that will be part of the information sent. The optic nerve knowing something about the Kennedy Face neuron is not the main story. The optic nerve would have to know something about many neurons. There would be a separate Face neuron for other Presidents, a separate neuron for each person the guinea pig knows, even different Thing neurons, one for every thing the guinea pig has seen in his life. If you care about science, the earth-shattering implication is the optic nerve may have to know something about thousands of neurons.
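
The key idea can be sketched as a toy in Python. Everything here is hypothetical - the neuron names, the signal, the matching rule - it only illustrates how a signal broadcast to every neuron, but carrying a key, would excite exactly one of them.

```python
# Toy model: each "neuron" records the signals that excited it.
# The names are invented for illustration.
neurons = {
    "kennedy_face": [],
    "lincoln_face": [],
    "charging_cat": [],
}

def broadcast(signal, key):
    """Expose every neuron to the signal; only the one whose
    name matches the attached key becomes 'excited'."""
    excited = []
    for name, log in neurons.items():
        if name == key:  # the key the optic nerve attached
            log.append(signal)
            excited.append(name)
    return excited

print(broadcast("photo of JFK", "kennedy_face"))  # ['kennedy_face']
```

The hard part, of course, is the one the toy skips: how the sender could build the right key in the first place.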

How much information (regarding other neurons) would the optic nerve have to keep, maintain, and manipulate? I ask this question because, in computer terms, you have to increase your processing power as you increase the amount of information that must be processed.

I can think of one method the optic nerve might use to send information to the Face and other neurons that might require less (but still some) knowledge of these other neurons. I don't know if this method would require more or less processing power. I am talking about some kind of cryptographic system.

Let me describe a common cryptographic system that our computer networks use, but in terms of the optic nerve and the Face neuron. This cryptographic system uses two keys -- a public key known to all neurons and a private or secret key known only to the Face Neuron. When the Optic Nerve wants to send a secure message to the Face Neuron, the Optic Nerve uses the Face Neuron's public key to encrypt the message. The Face Neuron then uses its private key to decrypt the message (becoming "excited" at the same time).
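
A toy version of such a two-key system fits in a few lines of Python (3.8 or later, for the modular-inverse form of pow). The tiny primes are for illustration only; real public-key systems use numbers hundreds of digits long, and nobody has shown that neurons actually do anything like this.

```python
# Toy RSA-style public-key scheme with tiny primes.
p, q = 61, 53
n = p * q                  # part of both keys
phi = (p - 1) * (q - 1)
e = 17                     # public key (e, n): known to all "neurons"
d = pow(e, -1, phi)        # private key: known only to the "Face Neuron"

message = 42               # the signal from the "Optic Nerve"
ciphertext = pow(message, e, n)    # encrypted with the public key
decrypted = pow(ciphertext, d, n)  # only the private key recovers it

assert decrypted == message
```

Anyone can encrypt with the public pair (e, n), but without d the ciphertext is (for large enough primes) practically unreadable - which is exactly the property the Face Neuron would want.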

I am sure I am not the first to think of neurons as small computers and neural networks as groups of neurons working together, just like our computers work together on a computer network. If, however, we want to keep our analogy in data processing terms, we can no longer compare our computers to neurons. If optic nerves, and most likely human neurons, wood bee neurons, and all living cells, have within themselves the processing power to "know something" about four billion other things, you can no longer compare a neuron to a computer. That would be as silly as comparing something with the computing power of the iPhone in your pocket to your child's toy abacus.

If I am going to talk gibberish, maybe I should return to the land of gibberish, also known as Quantum Physics, for support.

Quantum Physics has dominated the physical sciences for almost a century. Vast amounts of research dollars have been spent. Somebody, somewhere, must think Quantum Physics is important, valid, and worth the research dollars that have been allocated.

A number of years ago, I read about a "Scanning Tunneling Microscope" (I am sure it cost more than the fifty dollar microscope you could get at a toy store). This microscope was owned by IBM. The word "Tunneling" in the name refers to a quantum concept. From Wikipedia: Quantum Tunneling refers to the quantum mechanical phenomenon where a particle tunnels through a barrier that it classically could not surmount.

The significant part of the microscope was a very sharp electrically charged needle. In fact, it was so sharp that the point of this needle was a single atom. The rest of the microscope was designed to let an IBM researcher move this point (atom) over the surface of a sample. Electrons would flow from the point (atom) to an atom on the surface of the sample. By moving the needle back and forth while monitoring the electron flow, the IBM quantum scientist could show the atoms that made up the surface as they rose and fell, forming peaks and valleys. This was a map of the surface, but the power of the microscope was high enough to see single atoms.

One more thing about this particular Scanning Tunneling Microscope. The IBM quantum scientists figured out how to move some of these surface atoms. Just to impress us, they moved a couple of dozen atoms around and spelled out the word "IBM".

Very, very impressive. Until you realize that Life has been moving around billions and billions of atoms for the last 1500 million years.

If you still doubt the commitment of the scientific world to quantum physics, you only have to consider particle physics and the Large Hadron Collider. The Large Hadron Collider is at CERN, near Geneva on the French-Swiss border. It cost about $13 billion to build and has an annual budget of about one billion dollars. That is a lot of commitment.

Particle Physicists are quantum scientists who study subatomic particles. Tools of their trade include particle accelerators (the Large Hadron Collider is the world's largest particle accelerator), a vivid imagination, and very advanced mathematics. They use the advanced mathematics to prove, or fail to prove, that what they are imagining is a true picture of some aspect of Reality.

Particle Physicists, like little boys, are obsessed with collisions. They use particle accelerators to make subatomic particles collide and then they take a close look at the debris. Like any good quantum scientist, they know (or think they know) that mass is equivalent to energy. This means that the total energy of the subatomic particles before the collision must equal the total energy of the debris after the collision.

If the Particle Physicist finds that the total energy before the collision is equal to the energy of the debris, he is bored and wants to buy a more powerful particle accelerator. If he finds the energy of the debris is greater than before the collision, the Particle Physicist will probably decide he has made a mistake and try to figure out what it was. On the other hand, if there is not enough energy in the debris, the imaginative quantum scientist will dream up a subatomic particle to account for the missing energy. By postulating specific properties and using his exotic mathematics, he can design new, collision-based experiments to verify the reality of this new particle. Many new subatomic particles have been discovered.
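
The bookkeeping behind "not enough energy in the debris" is simple arithmetic. The numbers below are invented for illustration; real analyses track momentum as well and involve far more statistics.

```python
# Energy accounting for one collision, in arbitrary units.
energy_in = 1000.0                # total energy going into the collision
debris = [420.0, 310.0, 145.0]    # energies of the detected fragments

# Whatever is missing must have been carried off by something
# the detectors did not see -- a candidate new particle.
missing = energy_in - sum(debris)
print(missing)  # 125.0
```

A consistent "missing" value across many collisions is what justifies dreaming up the new particle.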

For decades, Particle Physicists have been developing and expanding what is called the Standard Model - a mathematical description of the elementary particles of matter and the electromagnetic, weak, and strong forces by which they interact. The mathematics predicted a particle that would explain why other particles had mass. This was the so-called God Particle (the Higgs boson) and was part of the reason the Large Hadron Collider was built. It has the power to produce collisions where the particle could be detected in the debris. Quantum Physicists claim to have detected the God Particle recently, confirming the validity of the Standard Model.

Several particles, including the proton (the nucleus of a hydrogen atom), are called Hadrons. In the Large Hadron Collider, protons reach such a high speed that, when two collide, they are broken apart. The debris shows each proton is made of three quarks.

I could go on, but my purpose was to paint a picture of the complexity of this branch of quantum physics. I would like to make several statements that may or may not be true.

Particle Physics has contributed to our understanding of Reality and it has supported real life technological advances.

I believe that the Standard Model, like any mathematical proof, is based on accepted scientific facts, and could be shaken if new insights change these facts.

There are theories about what the internal structure of quarks might be, but to find out you would have to split open a quark. I have heard this would require more energy than there is in the entire universe. Of course, there may be an infinite number of universes. How would you direct this energy at one quark?

So we will never know what is in a quark. But, remember, that is what scientists were saying about stars a few centuries ago.

My biggest problem with particle physics is its most powerful tool, its elegant, advanced mathematical model, that only a few can understand. The reasonably intelligent layman has to take its validity on faith.

I hope I have done an adequate job describing particle physics. It is a challenging subject.

I need to take up a subject that may be even more challenging. This subject, quantum computers, may be important to understanding how we, wood bees, and many other species, think. Before I begin the daunting task of describing a subject that I'm not sure I understand, let me talk about one other subject.

There may not be one branch of science that addresses what I want to discuss, but comparative anatomy includes part of it. From Wikipedia: Comparative anatomy is the study of similarities and differences in the anatomy of different species. It is closely related to evolutionary biology and phylogeny (the evolution of species). This science supports evolution by pointing out similarities in certain structures within members of different species alive today with the same structure in an assumed common ancestor - information about the common ancestor is gathered by a detailed study of fossils.

One common example often given is the bone structure in the upper limb of various vertebrates (animals with backbones) - this structure is very similar in humans, dogs, birds, and whales. I wasn't particularly interested in the evolutionary implications, but more interested in that it showed that when something works, Life will use it (in this case, the bone structure) over and over again. Of course, if it didn't work, evolutionary pressure would wipe it out.

Comparative anatomy also pointed out that sometimes there are no structural similarities between animals that are doing the same thing. For example, a bird, a wood bee, and a bat all fly. The internal structure of their wings, however, is different - especially between the bee and the other two. To me, the important thing to notice is Life has found out that flying is important and will use wings over and over again. I would not want to be so engrossed with the internal structure that made a bird wing work that I missed a different structure that made a wood bee's wing work.

Since I am interested in how we think, whether we are a human or a wood bee, I wanted to find out about glial cells or other cells that are associated in some way with neurons, accepting at least for the foreseeable future the conventional wisdom that thinking and neurons go together.

As I have said before, glial cells support and protect the neurons. I got that off the web somewhere. Wonder what protect means? Are there a bunch of little terrorist cells running around shooting neurons?

Glial cells do a lot, or at least, they seem to in humans. I can't find as much information on glial cells in other species. It is almost as if there is a bias, as if all glial research scientists are humans. Anyway, let us talk a little about human glial cells and hold off thinking about whether the same things happen in other species.

Glial cells help neurons move. An estimated ninety percent of neuron movement is under the guidance of glial cells. Radial glial cells are long and provide the guidance fiber scaffolding that neurons crawl along during their mass migration across the brain in fetal development. Other glial cells guide more limited movements in adults. The question to keep in the back of your mind and perhaps never answer is "Why do neurons need to move?".

Glial cells are also intimately involved with neuron to neuron interactions. They participate in signal transmissions in the nervous system and control the speed of nerve impulses. I read that glial cells assist the neurons in forming synaptic connections, but I wasn't sure if this meant just between nearby neurons or those that are more remote. The "more remote" is of more interest to me. The glial fibers are molecules that could stretch for long distances while under the control of the glial cell which might have as much processing power as neurons seem to have. The glial cell could be monitoring neural activity in neurons that are remote from each other and then strengthening the signal transmissions when it "decides" the activity is related.

I once read that studies of Albert Einstein's brain, after his death, showed the part of his brain we associate with space visualization contained an abnormally large number of neurons with excessive neural connections among these neurons. His brain actually weighed more than was normal.

The saying "Use it or Lose it" seems to apply to the neuron glial cell relationship. I can imagine Einstein as a young boy visualizing space and objects and shapes within this space. On a molecular level, his glial cells notice neurons active in this process and encourage electrochemical signals between these neurons. At the same time, glial cells begin to promote the growth of neural connections that can actually be seen under a normal microscope. As the years went by and Einstein continued to visualize space and objects, glial cells continued to promote the growth of these connections, but they also encouraged the creation of new neurons to help with this important work. The ultimate result is a big, heavy brain and the theory of relativity.

Now let us try to move on to the glial neuron relationship in other species. I want to emphasize the "try to" in the previous statement. My goal is to present my findings in simple, easy to understand statements about this subject. I am going to do this, but I also think it is important to go into some detail about why it is difficult.

I would like to know the ratio of glial cells to neurons in different species. Not surprisingly, since most glial cell scientists are human, much more research has been done on the human brain. This is not to say that a lot of research hasn't been done on other species.

There are 100 billion neurons in the human brain. There are 50 times as many glial cells as neurons (a fifty-to-one ratio), or about five trillion glial cells. That is a lot. Except new research shows the ratio is really about four to one. Or maybe it is one to one. Ask a simple question. Oh, by the way, the ratio is different in different parts of the brain. So, to me, the simple answer is "we don't know, but there are a lot of human glial cells". By extension, for other species, which have been studied less, we also probably don't know the corresponding ratio for their brains.

Another thing we probably need to know is the difference between the Central Nervous System (CNS) and the Peripheral Nervous System (PNS). In humans, the CNS consists of the nerves in the brain and the spinal cord. The PNS consists of the nerves that leave the brain or the spinal cord and travel to certain areas of the body. It is assumed, and I am not disagreeing, that the main purpose of the PNS is to send information gathered by the body's sensory receptors to the CNS as quickly as possible.

Other animals have similar systems and we can assume that they operate in, more or less, the same way as the human CNS and PNS do. But the key words are "more or less".

I used the word "animals" in the previous paragraph intentionally. I could have been more precise by possibly saying "Other Vertebrates", or more likely "Other Mammals" or definitely "Other Primates". The expert in a field, in this case, biological science, wants to label animals to the nth degree. He is obsessed with knowing a cat is not a primate and a monkey is not a chimpanzee.

This verboseness of experts, as well as their tendency to write scholarly papers full of graphs, mathematical formulas, and ten-syllable Latin words, makes it more difficult to decipher what is being said and put it into words a normal person could understand. Having said this, however, although I want the reader to understand what I am writing, my goal is not to write another cute science book for the general public.

One minor goal I do have is to occasionally ask an intelligent question that no one has thought to ask before. My major goal, however, is to understand the nature of thought. I don't care if thought occurs in the Central Nervous System, the Peripheral Nervous System, or a System we have not yet discovered. I don't care if thought occurs in the head of a man or the middle right leg of a wood bee.

My research method is to use a search engine (of course, I mean Google) to read about a subject that I believe can help support points I am trying to make. This activity often exposes me to the writings and theories of the verbose scientists I mentioned above. I then endeavor to put what I have read into real English. Sometimes during this process I will read something that I think really emphasizes an important point I hope to make. But at the time I read this, I am discussing something else. Later, when I hope to discuss my important point, I can no longer find the reference on the web. In these cases, I say "I read somewhere".

I read somewhere that the ratio of glial cells to neurons in rats was only 20 percent of what the ratio was in humans. In other words, for every ten human neurons, there would be ten human glial cells; for every ten rat neurons, there would be only two rat glial cells. The human researcher who made this observation had some species biased conclusions.

The researcher concluded that the relative abundance of glial cells may have had a positive effect on human mental capacity, that a human's computational abilities were much greater than those of a rat. The researcher stated, or at least assumed, that a human was smarter than a rat and this research helped prove it.

The problem is the use of "smarter than". I have read there are seven kinds of intelligence. I suspect there are many more. If we think about it, we quickly realize there are many kinds of intelligence. One person can easily memorize many technical facts and impress his peers by recalling these facts quickly, yet he cannot write a clear, concise sentence. Another person can invent Quantum Physics, yet think Hitler is a wonderful person. I can find calculus easy, but statistics hard. You may be able to remember a thousand faces, but only a few names. If smart were defined in concrete, report cards covering six subjects would always contain six A's, or six F's.

How would a rat researcher define "Smart"? He might be proud to say that his subject, Mr. Rat, could identify over five hundred rat faces based mainly on the kinks, colors, and directions of various rat whiskers. Mr. Rat, who is well traveled, might also be able to identify the more than 1700 specialty cheeses found worldwide, just by a quick sniff. Smart is in the Eye of the Beholder.

I have a theory. I have a theory that I hope can be supported by observations of nature, the nature we see around us. Speaking of observations, this theory is based on two observations. First, Man discovered quantum physics about ninety years ago. We can say that Man has had about a century to figure out quantum physics, to know enough to be able to build practical devices like computers. Second, Life has existed on Earth for 1500 million years. This is 15 million times as long as Man has known about quantum physics.

I have a theory. I have a theory that Life discovered quantum physics long before Man existed. Life discovered that quantum physics could help its creatures survive, that quantum physics made thinking and being self aware possible, and creatures with this trait had a better chance of survival than whatever creatures had come before.

Now is a good time to discuss what we know about quantum computing and how close we are to creating a useful, working quantum computer.

The computers we have today (actually, any device containing a computer chip) are almost exclusively dependent on silicon. This silicon has been grown into a crystal structure - a unique arrangement in which a group of atoms, arranged in a particular way, is periodically repeated in three dimensions on a lattice. In silicon destined to be in a computer chip, the lattice can be made up of billions of such groups. The lattice is sliced into discs called wafers. Intel's web site explains how the process continues: Chips are built simultaneously in a grid formation on the wafer surface - A chip is a complex device that forms the brains of every computing device. While chips look flat, they are three-dimensional structures and may include as many as 30 layers of complex circuitry.

Each of the billions of groups (we can call them "bits") can have a value of "1" (on) or "0" (off) and the values of all the bits can be controlled by the complex circuitry. Modern computers can process (use in calculation or move from one location to another) billions of bits almost instantly. The key word is "almost". When we start to need to process huge numbers of bits, like maybe a million billion bits, we start to need a quantum computer.
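To make the bit idea concrete, here is a tiny Python sketch (my illustration, of course - real chips do this in hardware, not software):

```python
# A classical register: each bit is definitely 0 or 1, never anything else.
# Toy sketch for illustration only.

def to_bits(value, width):
    """Render an integer as a list of 0/1 bits, most significant first."""
    return [(value >> i) & 1 for i in range(width - 1, -1, -1)]

register = to_bits(202, 8)
print(register)  # [1, 1, 0, 0, 1, 0, 1, 0]

# A classical n-bit register holds exactly one of 2**n possible states
# at any moment - never a mixture.
print(2 ** 8)  # 256
```

Eight bits, 256 possible states, but only one state at a time. That "one at a time" is exactly what the qubit, described next, refuses to respect.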

With modern computers, we talk about bits. With quantum computers, we talk about qubits. A qubit can have a value of "1" (on), "0" (off), or a third value which may be viewed as a mixture of the first two. I will call this value "maybe".
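The textbook picture of a qubit, which I will lean on below, describes the "maybe" as a pair of amplitudes. Here is a small Python simulation of that picture (a cartoon, to be clear - a real qubit is not a random number generator, and the amplitudes here are my illustration):

```python
import random

# Textbook sketch: a qubit's state is a pair of amplitudes (alpha, beta)
# with |alpha|^2 + |beta|^2 == 1. Looking at ("measuring") the qubit
# collapses it to 0 with probability |alpha|^2, else to 1.

def measure(alpha, beta):
    """Collapse the superposition into a definite 0 or 1."""
    return 0 if random.random() < abs(alpha) ** 2 else 1

# An equal mixture of "0" and "1" - the "maybe" state.
alpha = beta = 0.5 ** 0.5  # both amplitudes are 1/sqrt(2)

counts = {0: 0, 1: 0}
for _ in range(10000):
    counts[measure(alpha, beta)] += 1
print(counts)  # roughly 5000 of each
```

Before you measure, the state is genuinely neither "1" nor "0"; after you measure, it is definitely one or the other. Keep that in mind for the discussion of decoherence below.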

I think and hope I can explain quantum computers without fully understanding what a qubit is. If I had access to a friendly quantum computer expert, he might help me gain a better understanding. Lacking this access, let me describe why I am confused.

My confusion is rooted in two things I have discussed before and which I will briefly summarize below, and a quantum concept called superposition.

The University of Waterloo is a highly ranked Canadian university. From its website: Superposition is essentially the ability of a quantum system to be in multiple states at the same time — that is, something can be “here” and “there,” or “up” and “down” at the same time.

One of the things I'd like to summarize quickly is what started quantum physics in the first place - no one could determine exactly where an individual electron would be. If you fired an electron at a metal pole, the electron might hit the pole, miss the pole, or glance off the pole at different angles. You could only give probabilities of where an individual electron would go.

The other thing I'd like to summarize quickly is the famous (to quantum physicists and lay people with an interest in science) Schrodinger's cat experiment. Schrodinger's cat was an imaginary cat that Schrodinger put into a box. Then a random, unpredictable event either killed the cat or left it alive. The only way to know if the cat is alive or dead is to open the box. Quantum physics says that, before the box is opened, the cat is in some kind of limbo, maybe both alive and dead, or half alive and half dead, maybe in the "maybe" state I mentioned above.

I also need to introduce one more quantum concept that applies to quantum computers, decoherence. A qubit, unlike a digit, is a quantum concept. Complex electronic circuitry that helps define modern computer chips can easily determine the value of a digit, whether it has a value of "1" (on) or "0" (off). A qubit, on the other hand, is very, very shy. If you even look at a qubit (for example, with electronic circuitry), it will lose its quantumness. When a qubit loses or falls out of its quantum state, quantum scientists say it has decohered. Quantum scientists do not want qubits to undergo decoherence. They seek to make practical use of qubits without looking at them.

Now let me explain my confusion about qubits. Suppose we have four different kinds of digits (where we are ignoring that the "di" part of the word means two). We have digit-1, digit-2, digit-3, and digit-4. Digit-2 is the ordinary two valued digit of a modern computer, always either a “1” or a “0”.

Digit-1 has one value, either a “1”, a “0”, or a “maybe”. If the value of digit-1 is “1” or “0”, modern computers are not possible. Every calculation gives the same answer, “1” or “0”: 1 + 1 = 1, 1 – 1 = 1, 0 + 0 = 0, 0 – 0 = 0. If the value is “maybe”, the circuitry of the modern computer will make each digit-1 decohere into either a “1” or “0”. In this case, digit-1 is just like digit-2.

Digit-3 is the three valued kind of digit that matches the definition of the qubit, but it is again just digit-1 with one value of either “1”, “0” or “maybe”. As with digit-1, the quantum scientists have endeavored against great odds not to scare the shy “maybe” lest it undergo the dreaded decoherence. In this case, I have to wonder, although we can never look to see, what this “maybe” becomes. Is it, as in the Schrodinger live cat, dead cat dichotomy, a “1” or a “0”, and does it matter?

Consider now digit-4. It is again digit-1, but this time we are using another “maybe”, a “maybe-2”. Maybe-2 would be from the world of electron experiments where the shy maybe-2 becomes one of an infinite number of values between “1” and “0”, for example “.0587”.

I suspect the quantum scientists are saying that “maybe” and “maybe-2” become the same thing when enough qubits are processed. Nevertheless, I would like to ask the question: If you designed a quantum computer based on a qubit with two values, a “maybe” and a “maybe-2”, would you have a better quantum computer, a super quantum computer?

Before continuing this line of thought (Did you really think I was finished?), we should discuss another quantum concept. This was another quantum idea that really made Albert Einstein mad. Einstein's theories, and thus his reputation, were built on the idea that the speed of light was constant and nothing could go faster than the speed of light. Einstein believed that any scientist who suggested that something could go faster than light speed was not a scientist, he was an idiot.

So Albert Einstein was mad when a quantum idiot introduced the concept of Entanglement. From the University of Waterloo's website: Entanglement is an extremely strong correlation that exists between quantum particles — so strong, in fact, that two or more quantum particles can be inextricably linked in perfect unison, even if separated by great distances. The particles are so intrinsically connected, they can be said to “dance” in instantaneous, perfect unison, even when placed at opposite ends of the universe. This seemingly impossible connection inspired Einstein to describe entanglement as “spooky action at a distance.”
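A classical cartoon of that "perfect unison" can be written in a few lines of Python. I should hedge loudly: this shared-coin trick only mimics the correlation; real entanglement is stronger - it violates Bell inequalities, which no classical shared-information model can do.

```python
import random

# Two "particles" that always give the same measurement result, no
# matter how far apart they travel. This is only a classical imitation
# of the correlation in the Waterloo quote, not real entanglement.

def entangled_pair():
    """Return the measurement results for both halves of the pair."""
    shared = random.choice([0, 1])
    return shared, shared

for _ in range(5):
    here, far_away = entangled_pair()
    assert here == far_away  # the two results always agree
print("perfectly correlated")
```

The spooky part, which the sketch cannot capture, is that the real particles do not carry a pre-agreed answer with them - and yet they still agree.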

My next task is to tie these quantum concepts and my thoughts to current work on achieving functional and useful quantum computers. Almost everyone agrees that it would be good to have faster computers. It is not as easy as you would think to describe how research in this area stands, especially without getting into mind numbing math related to exotic systems (what kind of systems? - varied and complex).

One of my first steps was to read an article published last year by Wired Magazine entitled "The Revolutionary Quantum Computer That May Not Be Quantum at All". This article ties a small Canadian company, D-Wave Systems, Inc., to Google, IBM, and ConAgra Foods, three very large companies. D-Wave claims to have developed a computer chip containing 512 qubits. This chip is the heart of D-Wave's quantum computer. As the title of the Wired article implies, there is controversy over whether or not this is really a quantum computer, but Google was impressed enough to buy one, probably paying D-Wave about ten million dollars.

To test the power of the D-Wave computer and perhaps determine if it was a quantum computer or just a regular computer, it was given a problem that normal computers regularly solve. IBM has a program called CPLEX. ConAgra uses this program to crunch global market and weather data to find the optimum price at which to sell flour. CPLEX running on a normal Intel chip based computer took 30 minutes to find the answer. The D-Wave found the answer in about half a second - 3,600 times as fast.

The D-Wave computer is a special purpose computer. It has to be programmed to accomplish a limited number of tasks. A cell phone is a general purpose computer - it can take pictures, send emails, text, play games. We need, but are a long way from having, a general purpose quantum computer.

Reliable quantum computers would revolutionize research in many fields and lead to numerous technological advances. I can mention some, without mathematical references, or even long winded definitions. Quantum computers could break the security keys that keep our financial transactions safe as we buy and sell on the Internet. To make up for this, quantum computers could develop new security keys that could never be broken. Quantum computers could process possible reactions between molecules and atoms, forecasting new and better superconducting materials that operate at room temperature - short winded definition: superconducting materials can conduct electricity over a long distance with no resistance and thus no loss of power. Calculations performed on a quantum computer can theoretically take seconds to complete, while the same calculations performed on a silicon based processor could take years.

To understand our current status regarding quantum computing, I want to review in more detail what quantum scientists, in general, are doing, and, in particular, the progress and setbacks of the D-Wave Scientists since the company began early this century.

The current, primary goal of quantum scientists is to develop reliable qubits. To quote again from the IQC (Institute for Quantum Computing) website: "... we need qubits that behave the way we want them to. These qubits could be made of photons, atoms, electrons, molecules or perhaps something else. Scientists at IQC are researching a large array of them as potential bases for quantum computers. But qubits are notoriously tricky to manipulate, since any disturbance causes them to fall out of their quantum state (or “decohere”). Decoherence is the Achilles heel of quantum computing, but it is not insurmountable. The field of quantum error correction examines how to stave off decoherence and combat other errors. Every day, researchers at IQC and around the world are discovering new ways to make qubits cooperate."

The IQC website also reports that a joint effort between their researchers and scientists from MIT has produced an experiment in which twelve qubits are used together. This is a world record - at least, according to IQC. I will call this the "mainstream science view". D-Wave Systems claims that the computers they are selling each have a chip that contains 512 qubits. Their newest chip has more. Obviously, D-Wave scientists are not mainstream.

The D-Wave computer is a ten foot high black box designed to keep the tiny quantum chip within it very, very cold. In fact, the temperature is close to absolute zero. The reason cold is needed is that almost anything (heat, vibration, electromagnetic noise, a stray atom) can cause a quantum system to undergo decoherence (a bad thing). The heart of the chip is a group of 512 niobium loops which, when chilled sufficiently, exhibit quantum-mechanical behavior, becoming, in effect, qubits.

Niobium is a soft, grey, ductile metal. The niobium loops are small by our standards, less than a millimeter across and barely visible. They are huge, however, relative to computer components. D-Wave selected them for its computer, in part, because they could easily be mass produced by a regular microchip fabrication laboratory.

When a niobium loop is cooled to almost absolute zero, two magnetic fields form that run around the loop in two opposite directions at the same time. In physics, electricity and magnetism are the same thing - the fields can be interpreted as electrons in a superposition state (a quantum concept). If the loop exhibits other quantum properties, it can serve as a qubit. The qubits on the chip can, D-Wave hoped, be connected by quantum tunneling and entanglement. The wires needed to connect the components on the chip and the optical fiber cable that transmitted information to the outside world were engineered to stave off decoherence.

When the D-Wave computer was being designed, most "mainstream" scientists were pursuing a "gate model" quantum architecture where qubits were placed on a chip to form standard logic gates like those on regular computer circuits - the ands, ors, nots, and so on that assemble into how a computer thinks. The D-Wave scientists decided to pursue a different architecture - one they felt would lead to a more robust chip, that is, less subject to decoherence. It would still allow their computer to solve optimization problems, which were very important, but they would not have a general purpose quantum computer.

The question was "Did they have a quantum computer at all?". And so the fun began.

In 2010, D-Wave landed its first customer, Lockheed Martin. In 2013, Google and NASA were potential new customers of D-Wave. NASA wanted to determine the best route for its Mars Rover to follow as it explored Mars - a classic, difficult optimization problem that could be handled by D-Wave's computer. Before they would buy, however, Google and NASA demanded benchmark tests.

A benchmark test, in this case, is basically running a computer program on one computer - a standard desktop silicon based computer, and then running the same program again on another computer - the D-Wave Quantum Computer. If the D-Wave Quantum Computer ran the program faster and finished much more quickly than the standard computer, it was most likely really a quantum computer. If, on the other hand, for whatever reason, its niobium qubits had suffered decoherence, D-Wave's computer would have become a very expensive, standard desktop computer. Its performance in a benchmark test would not be substantially different from the performance of the silicon based computer.
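In computer terms, a benchmark is just a stopwatch. Here is a toy version in Python, with a made-up workload standing in for the CPLEX optimization problem (the two "machines" are just two implementations of the same task):

```python
import time

# Benchmark sketch: run the same task two ways, check that the answers
# agree, and compare wall-clock times. The workload is hypothetical -
# summing 0..n-1 the slow way versus using the closed-form formula.

def slow_sum(n):
    total = 0
    for i in range(n):
        total += i
    return total

def fast_sum(n):
    return n * (n - 1) // 2  # closed form, effectively instant

n = 1_000_000
t0 = time.perf_counter(); slow = slow_sum(n); t_slow = time.perf_counter() - t0
t0 = time.perf_counter(); fast = fast_sum(n); t_fast = time.perf_counter() - t0

assert slow == fast  # same answer either way
print(f"speedup: {t_slow / max(t_fast, 1e-9):.0f}x")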

Three benchmark tests were performed. One, mentioned above, used the IBM CPLEX program. D-Wave passed with flying colors. For the other two benchmark tests, the results, unfortunately, were less clear cut - but Google and NASA bought D-Wave's machine.

Mainstream quantum scientists, with their 12-qubit experiment, had been skeptical of the D-Wave 512-qubit claim. With the Lockheed Martin purchase, the mainstreamers could, for the first time, run unbiased benchmark tests. Most of these tests showed that standard, non-quantum, special purpose (not general purpose) computers would perform just as well as the D-Wave. It is worth noting that some of these results were available when, in 2013, the Google - NASA purchase was made.

D-Wave responded that the benchmark tests were really not unbiased. They had reasons why the tests were unfair. In addition, the mainstreamers had been very thorough, running many tests. On some of these, the D-Wave computer had performed well. D-Wave scientists wanted to know why - did these particular tests show less decoherence? Could these results help them make their quantum computer better?

D-Wave still has the support of Google and NASA. They now have a D-Wave 2X computer which sports 1,097 qubits. From a Tech Times article published in December 2015: "Google Director of Engineering Hartmut Neven said that what the D-Wave 2X can process in a span of a second is something that a single-core classical computer can solve in a span of 10,000 years." The mainstreamers immediately, enthusiastically, and vehemently attacked the claim.

This war about the number, kind, and uses of qubits is very informative. When you look at the big picture, however, you may feel, as I do, that we have only scratched the surface. We need to know a lot, lot more before we can hope for major advances in the field of quantum computing.

For 1500 million years, Life has been moving molecules, atoms, and maybe even sub-atomic particles around, with the, perhaps blind, goal of helping each individual creature survive and reproduce.

When we look at particle physics, we see an IBM researcher move twenty atoms to spell "IBM", while within his body, Life moves billions of atoms so that the researcher can stand there, look at his work, and think about what he has done.

When we look at quantum computing, we see scientists battling each other over how many qubits they have created, while having no real understanding of even what a qubit is. They seem a long way from understanding their mysteries and being able to build practical devices that work in the "real" world. Maybe if quantum computer experts would turn the limited, working tools they have on living cells, they could learn enough to make real progress.

The first bee took to the air 130 million years ago. The first tennis racket was produced 142 years ago. Even today, you do not see a tennis racket on every street corner. It is a safe bet that the wood bees on my son's patio had never seen a man carrying a tennis racket. Instinct, whatever that is, could not have told the bees to beware. Only the death of a wood bee made the survivors realize: man plus tennis racket equals danger.

Ancient wood bees liked to tunnel into wood. They had the teeth, or whatever, for it. Each bee was proud of the perfect round entrance and knew his particular tunnel would be the best home for his future offspring.

A wood bee flying around a young Abe Lincoln's log cabin a couple of centuries ago would have felt the same. The neurons in his wood bee brain had changed little from those that lived in his ancient ancestor. When he saw a gangly young man walking around, he felt anger. He could imagine the giant monster stepping on him or eating his mate. Wood bees can't sell. They can't speak English. The only way to convince the monster to leave was to buzz angrily around his head. And it usually worked.

When my son served a wood bee into the backyard, the surviving wood bees' anger probably increased, but a new emotion, fear, instantly appeared. To any intelligent creature, the solution was obvious - fly close to the beams, don't give the monster a clear shot.

More THINKING LONGER AND HARDER. - Mike Stewart. - mike@esearchfor.com


I was impressed, but I don't think the surviving bees felt the same emotion. Also, I don't think any bug scientist has ever predicted how they would react. The wood bees seemed to come quickly to the conclusion that "discretion is the better part of valor".

After the tragic death of one of their members, the wood bees kept close watch on Michael. Whenever he approached, they would fly close to the wooden columns or up, between the wooden beams. He couldn't get a clear swing. I believe, at the end of the day, the score was still Humans 15, Bees Love.

A Bug Scientist might want to investigate the persistence of this phenomenon. He might want to know if the tennis racket would be remembered tomorrow, or next month, or next year. Yes, I know, experiments have shown this kind of experience is not inherited across generations. But quantum physics indicates to me that it never hurts to ask.

I, however, want to pursue a different question: What goes on in the mind of a wood bee?

Before we can explore the mind of a wood bee, we must first admit to the possibility that a wood bee has a mind. You may find it possible to believe I have a unique worldview and you have a different, unique worldview. It is, however, probably more difficult to believe both worldviews are equally valid. Or invalid. But it requires a quantum leap to accept that a wood bee has a worldview, and this worldview is just as valid, or invalid, as yours or mine. If quantum physics hadn't encouraged me to see the philosophical nature of science, I don't think I would have seen the evidence that is everywhere around us.

The bee brain is made up of about one million neurons. The human brain is made up of about 100 billion neurons (about 100,000 times as many as are in the bee brain) and many more neuroglia (or glial cells) which serve to support and protect the neurons.

When we have a group of neurons as in the human brain, the subject of neural networks arises. We need to consider the many different combinations of "excited" neurons that are possible, whether these neurons are in bee or man. I would like to have this discussion, in part, because of the current belief that our memories, thoughts, and even our being self aware, are a product of neural networks. I am going to use the term "neural networks", but what I say will also apply if it is later discovered that glial cells play a greater part than we now think. I don't know if bee brains contain something like glial cells (bug scientists may know), but it doesn't really matter. In fact, what I say would apply if it is discovered (by whatever scientist studies really, really small things) that large groups of similar molecules within a cell are behaving in a neural network like manner.

Such a discovery might support a theory that individual cells could be self aware.

Let me clarify the last two sentences. As I have previously said, cells like neurons are really HUGE. I gave the example, which I read somewhere, that if the diameter of a human hair was expanded to the height of the Empire State Building, a DNA molecule (which is made up of a large number of atoms and is usually found within the nucleus of a cell) would be the size of the toenail of a small dog sitting in the lobby. The "similar molecules within a cell" that I am talking about above are much smaller, made up of a few dozen or a few hundred atoms. The scientist mentioned would have to go into the lobby, pet the small dog on the head, sit down at a table and peer through his microscope to see them.

When I talk about neural networks, I want to talk about very large numbers with minimum use of numbers. This isn't as strange as it seems. I don't think any of us can really conceive of what 100 billion neurons means except that it is lots and lots and lots and lots of neurons. On the other hand, the one million neurons that make up the bee brain is just lots and lots of neurons.

Most experts on the structure of human neurons are not experts on neural networks. This is slowly changing.

Technology has now advanced to the point where a scientist can tell when an individual neuron is "excited". I want to relate an experiment I read about, but first I want to define "a scientist" and "excited".

In the case of this experiment, "a scientist" means a group of scientists standing around a patient or volunteer (for convenience, I'll call him the guinea pig). These scientists have access to some expensive equipment and have either attached electrodes to the scalp of the guinea pig or inserted probes into his brain. I don't know if these scientists are experts in a soft science like psychology or experts in biology. I don't know how much they know about human neurons or neural networks. I am pretty sure none are experts on bee neurons.

A neuron is "excited" when it receives input from another neuron and in turn transmits a signal. This signal is electrochemical in nature and is very weak. Electrochemical means the signal takes place in the neuron (actually in a part of the neuron called the axon) and is based on chemical reactions that results in a very weak electrical current. Our equipment is now capable of recording the strength of this signal.

Now to the experiment: The guinea pig is shown a picture of President John Kennedy. What happens in the brain of the guinea pig at the neuron level?

I want to ask some questions that the scientists probably did not ask, or, if they did ask, were probably not able to answer.

When the guinea pig sees the picture of President Kennedy, his optic nerve transmits information from his retina to his brain. Many neurons could receive this information, but only one becomes excited. Only this neuron, out of thousands, seems to be the Kennedy Face neuron. The Kennedy Face neuron, which is near the back of the brain, in turn sends a signal throughout the brain. Although an unknown, perhaps vast, number of neurons are exposed to this signal, only a few, as the Kennedy Face neuron did, become excited. These secondary neurons also send signals, resulting in a few more excited neurons. The guinea pig reports that the picture of Kennedy has caused him to think about Kennedy, when he was born, his inauguration speech, his family, his World War II record, his assassination, and other facts.

The neurons that were excited make up one Kennedy neural network.
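The cascade above can be sketched as a toy program. I must stress that the wiring below is pure invention on my part - nobody knows the real connections - but it shows how one excited neuron can define a whole network:

```python
# Toy spreading-activation sketch: one "Kennedy Face" neuron fires, its
# signal excites connected neurons, and the set of neurons that end up
# excited is the "Kennedy neural network". All connections are made up.

connections = {
    "kennedy_face": ["inauguration", "assassination", "ww2_record"],
    "inauguration": ["famous_speech"],
    "assassination": ["dallas_1963"],
    "ww2_record": [],
    "famous_speech": [],
    "dallas_1963": [],
    "moon_landing": ["apollo_11"],  # never reached, so never excited
    "apollo_11": [],
}

def excite(start):
    """Return every neuron reached by the signal starting at `start`."""
    excited, frontier = set(), [start]
    while frontier:
        neuron = frontier.pop()
        if neuron not in excited:
            excited.add(neuron)
            frontier.extend(connections[neuron])
    return excited

network = excite("kennedy_face")
print(sorted(network))  # the Kennedy neural network; moon_landing is absent
```

Notice that "moon_landing" sits in the same brain but stays quiet - which is exactly the puzzle of why only some neurons respond to a given signal.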

This little experiment brings up about a million questions. I won't try to address all of them, but let's wade into a few.

When the optic nerve transmits information from the retina to the brain, how is the information coded? This question comes from my computer and data processing background, one area where I could be called a "real expert", rather than just a well informed layman.

If you need to send information from one computer to another, you often have to compress the information first. A small amount of compressed information can be sent more quickly than a large amount of uncompressed information. In this Internet Age, one computer may be in the United States, the other computer may be in China. The information will probably go on a long trip where it will pass through at least two communication satellites. To save time, the first computer will compress the information before sending it, and the second computer will receive the compressed information and then uncompress it.
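Python's built-in zlib module can demonstrate the round trip (my example, nothing to do with actual optic nerves):

```python
import zlib

# The compress-send-decompress round trip: repetitive data compresses
# well, and the receiver recovers the original exactly (lossless).

message = b"the quick brown fox " * 200  # 4000 bytes of repetitive data
packed = zlib.compress(message)          # what actually gets sent

print(len(message), "->", len(packed), "bytes")  # far fewer bytes to send
assert zlib.decompress(packed) == message        # lossless round trip
```

Whether the optic nerve does anything like this is exactly the open question I am asking.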

When I ask "how is the information coded?", part of what I am asking is "Is the information compressed?".

Imagine that it takes 3 seconds for the uncompressed image of a charging cat to reach the brain, while it only takes a tenth of a second for the compressed image to reach its destination. The question of compressed or not would be very important to a bird.

Some other questions that might be asked are "Why is only the Kennedy Face neuron initially excited?", "How many other neurons are exposed to the optic nerve information and are not excited?", and "Do glial cells have anything to do with this process?".

I don't think we can answer these questions, but thinking about them could lead to valuable speculation, some of which may turn out to be true. Also, as I have said before, many times a non-expert with limited knowledge can ask a question he has no hope of answering, while the true expert, bound by a fixed way of thinking, and obsessed with the minutia of his profession, would never think to ask the question. Once asked, however, the expert would have the knowledge to find answers.

To give a concrete example that applies to this situation: I know what a synapse is. The Wikipedia definition is: In the nervous system, a synapse is a structure that permits a neuron (or nerve cell) to pass an electrical or chemical signal to another neuron. An expert would have much more knowledge about how a synapse operates than I would. It seems possible, or even likely, that the synapse between the optic nerve and the Kennedy Face neuron is acting differently from a synapse between the optic nerve and a neuron that is not being "excited". Maybe the expert can design an experiment that will give us detailed information about processes at the synapse level and we can compare the Kennedy Face neuron synapse and the unexcited neuron synapse.

Let's go back (we really haven't left) to our latest three questions: "Why is only the Kennedy Face neuron initially excited?", "How many other neurons are exposed to the optic nerve information and are not excited?", and "Do glial cells have anything to do with this process?".

It seems to me that the information being transferred must be more than just compressed. It must contain some key that the Kennedy Face neuron could recognize. Unfortunately, if we think about creating this key, we have to ask "How in the world could the optic nerve know it needs to send the information to the Kennedy Face neuron?". Once it knows that, it can create a key that will be part of the information sent. The optic nerve knowing something about the Kennedy Face neuron is not the main story. The optic nerve would have to know something about many neurons. There would be a separate Face neuron for other Presidents, a separate neuron for each person the guinea pig knows, even different Thing neurons, one for every thing the guinea pig has seen in his life. If you care about science, the earth shattering implication is that the optic nerve may have to know something about thousands of neurons.

How much information (regarding other neurons) would the optic nerve have to keep, maintain, and manipulate? I ask this question because, in computer terms, you have to increase your processing power as you increase the amount of information that must be processed.

I can think of one method the optic nerve might use to send information to the Face and other neurons that might require less (but still some) knowledge of these other neurons. I don't know if this method would require more or less processing power. I am talking about some kind of cryptographic system.

Let me describe a common cryptographic system that our computer networks use, but in terms of the optic nerve and the Face neuron. This cryptographic system uses two keys - a public key known to all neurons and a private or secret key known only to the Face Neuron. When the Optic Nerve wants to send a secure message to the Face Neuron, the Optic Nerve uses the Face neuron's public key to encrypt the message. The Face Neuron then uses its private key to decrypt the message (becoming "excited" at the same time).
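Here is that two-key scheme in miniature, using the textbook RSA example with tiny primes (tiny numbers are useless for real security, but the mechanics are the same; the neuron framing is, of course, my analogy):

```python
# Textbook RSA with tiny primes. The "Optic Nerve" encrypts with the
# Face Neuron's public key; only the Face Neuron's private key decrypts.

p, q = 61, 53
n = p * q        # 3233, the modulus, part of both keys
e = 17           # public exponent: the public key is (e, n)
d = 2753         # private exponent: (e * d) % ((p-1)*(q-1)) == 1

message = 65     # a "signal" encoded as a number smaller than n

ciphertext = pow(message, e, n)    # what the Optic Nerve broadcasts
recovered = pow(ciphertext, d, n)  # only the Face Neuron can do this step

print(ciphertext, "->", recovered)  # 2790 -> 65
assert recovered == message
```

Any other neuron that overhears the ciphertext but lacks the private key learns nothing - which would neatly explain why only one neuron out of thousands becomes excited.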

I am sure I am not the first to think of neurons as small computers and neural networks as groups of neurons working together, just like our computers work together on a computer network. If, however, we want to keep our analogy in data processing terms, we can no longer compare our computers to neurons. If optic nerves, and most likely human neurons, wood bee neurons, and all living cells, have within themselves the processing power to "know something" about four billion other things, you can no longer compare a neuron to a computer. That would be as silly as comparing something with the computing power of the iPhone in your pocket to your child's toy abacus.

If I am going to talk gibberish, maybe I should return to the land of gibberish, also known as Quantum Physics, for support.

Quantum Physics has dominated the physical sciences for almost a century. Vast amounts of research dollars have been spent. Somebody, somewhere, must think Quantum Physics is important, valid, and worth the research dollars that have been allocated.

A number of years ago, I read about a "Scanning Tunneling Microscope" (I am sure it cost more than the fifty-dollar microscope you could get at a toy store). This microscope was owned by IBM. The word "Tunneling" in the name refers to a quantum concept. From Wikipedia: "Quantum tunneling refers to the quantum mechanical phenomenon where a particle tunnels through a barrier that it classically could not surmount."

The significant part of the microscope was a very sharp, electrically charged needle. In fact, it was so sharp that the point of this needle was a single atom. The rest of the microscope was designed to let an IBM researcher move this point (atom) over the surface of a sample. Electrons would flow from the point (atom) to an atom on the surface of the sample. By moving the needle back and forth while monitoring the electron flow, the IBM quantum scientist could show the atoms that made up the surface as they rose and fell, forming peaks and valleys. The result was a map of the surface, with resolution high enough to show single atoms.

One more thing about this particular Scanning Tunneling Microscope. The IBM quantum scientists figured out how to move some of these surface atoms. Just to impress us, they moved a couple of dozen atoms around and spelled out the word "IBM".

Very, very impressive. Until you realize that Life has been moving around billions and billions of atoms for the last 1500 million years.

If you still doubt the commitment of the scientific world to quantum physics, you only have to consider particle physics and the Large Hadron Collider. The Large Hadron Collider is at CERN, near Geneva, Switzerland. It cost about $13 billion to build and has an annual budget of about one billion dollars. That is a lot of commitment.

Particle Physicists are quantum scientists who study subatomic particles. Tools of their trade include particle accelerators (the Large Hadron Collider is the world's largest particle accelerator), a vivid imagination, and very advanced mathematics. They use the advanced mathematics to prove, or fail to prove, that what they are imagining is a true picture of some aspect of Reality.

Particle Physicists, like little boys, are obsessed with collisions. They use particle accelerators to make subatomic particles collide and then they take a close look at the debris. Like any good quantum scientist, they know (or think they know) that mass (particles) is equivalent to energy. This means that the total energy of the subatomic particles before the collision must equal the total energy of the debris after the collision.

If the Particle Physicist finds that the total energy before the collision is equal to the energy of the debris, he is bored and wants to buy a more powerful particle accelerator. If he finds the energy of the debris is greater than before the collision, the Particle Physicist will probably decide he has made a mistake and try to figure out what it was. On the other hand, if there is not enough energy in the debris, the imaginative quantum scientist will dream up a subatomic particle to account for the missing energy. By postulating specific properties and using his exotic mathematics, he can design new, collision-based experiments to verify the reality of this new particle. Many new subatomic particles have been discovered.

For decades, Particle Physicists have been developing and expanding what is called the Standard Model - a mathematical description of the elementary particles of matter and the electromagnetic, weak, and strong forces by which they interact. The mathematics predicted a particle that would explain why other particles had mass. This was the so-called God Particle (the Higgs boson) and was part of the reason the Large Hadron Collider was built. It has the power to produce collisions where the particle could be detected in the debris. In 2012, Quantum Physicists claimed to have detected the God Particle, confirming the validity of the Standard Model.

Several particles, including the proton (the nucleus of a hydrogen atom), are called Hadrons. In the Large Hadron Collider, protons reach such a high speed that, when two collide, they are broken apart. The debris shows that each proton is made of three quarks.

I could go on, but my purpose was to paint a picture of the complexity of this branch of quantum physics. I would like to make several statements that may or may not be true.

Particle Physics has contributed to our understanding of Reality and it has supported real life technological advances.

I believe that the Standard Model, like any mathematical proof, is based on accepted scientific facts, and could be shaken if new insights change these facts.

There are theories about what the internal structure of quarks might be, but to find out you would have to split open a quark. I have heard this would require more energy than there is in the entire universe. Of course, there may be an infinite number of universes. How would you direct this energy at one quark?

So we will never know what is in a quark. But, remember, that is what scientists were saying about stars a few centuries ago.

My biggest problem with particle physics is its most powerful tool, its elegant, advanced mathematical model, which only a few can understand. The reasonably intelligent layman has to take its validity on faith.

I hope I have done an adequate job describing particle physics. It is a challenging subject.

I need to take up a subject that may be even more challenging. This subject, quantum computers, may be important to understanding how we, wood bees, and many other species, think. Before I begin the daunting task of describing a subject that I'm not sure I understand, let me talk about one other subject.

There may not be one branch of science that addresses what I want to discuss, but comparative anatomy includes part of it. From Wikipedia: Comparative anatomy is the study of similarities and differences in the anatomy of different species. It is closely related to evolutionary biology and phylogeny (the evolution of species). This science supports evolution by pointing out similarities in certain structures within members of different species alive today with the same structure in an assumed common ancestor - information about the common ancestor is gathered by a detailed study of fossils.

One common example often given is the bone structure in the upper limb of various vertebrates (animals with backbones) - this structure is very similar in humans, dogs, birds, and whales. I wasn't particularly interested in the evolutionary implications, but more interested in that it showed that when something works, Life will use it (in this case, the bone structure) over and over again. Of course, if it didn't work, evolutionary pressure would wipe it out.

Comparative anatomy also pointed out that sometimes there are no structural similarities between animals that are doing the same thing. For example, a bird, a wood bee, and a bat all fly. The internal structure of their wings, however, is different - especially between the bee and the other two. To me, the important thing to notice is that Life has found out that flying is important and will use wings over and over again. I would not want to be so engrossed with the internal structure that made a bird wing work that I missed a different structure that made a wood bee's wing work.

Since I am interested in how we think, whether we are a human or a wood bee, I wanted to find out about glial cells or other cells that are associated in some way with neurons, accepting at least for the foreseeable future the conventional wisdom that thinking and neurons go together.

As I have said before, glial cells support and protect the neurons. I got that off the web somewhere. Wonder what protect means? Are there a bunch of little terrorist cells running around shooting neurons?

Glial cells do a lot, or at least, they seem to in humans. I can't find as much information on glial cells in other species. It is almost as if there is a bias, as if all glial research scientists are humans. Anyway, let us talk a little about human glial cells and hold off thinking about whether the same things happen in other species.

Glial cells help neurons move. An estimated ninety percent of neuron movement is under the guidance of glial cells. Radial glial cells are long and provide the guidance fiber scaffolding that neurons crawl along during the mass migration across the brain during fetal development. Other glial cells guide more limited movements in adults. The question to keep in the back of your mind and perhaps never answer is "Why do neurons need to move?".

Glial cells are also intimately involved with neuron to neuron interactions. They participate in signal transmissions in the nervous system and control the speed of nerve impulses. I read that glial cells assist the neurons in forming synaptic connections, but I wasn't sure if this meant just between nearby neurons or those that are more remote. The "more remote" is of more interest to me. The glial fibers are molecules that could stretch for long distances while under the control of the glial cell which might have as much processing power as neurons seem to have. The glial cell could be monitoring neural activity in neurons that are remote from each other and then strengthening the signal transmissions when it "decides" the activity is related.

I once read that studies of Albert Einstein's brain, after his death, showed the part of his brain we associate with space visualization contained an abnormally large number of neurons with excessive neural connections among these neurons. His brain actually weighed more than was normal.

The saying "Use it or Lose it" seems to apply to the neuron glial cell relationship. I can imagine Einstein as a young boy visualizing space and objects and shapes within this space. On a molecular level, his glial cells notice neurons active in this process and encourage electrochemical signals between these neurons. At the same time, glial cells begin to promote the growth of neural connections that can actually be seen under a normal microscope. As the years go by and Einstein continued to visualize space and objects, glial cells continued to promote the growth of these connections, but they also encouraged the creation of new neurons to help with this important work. The ultimate result is a big, heavy brain and the theory of relativity.

Now let us try to move on to the glial neuron relationship in other species. I want to emphasize the "try to" in the previous statement. My goal is to present my findings in simple, easy to understand statements about this subject. I am going to do this, but I also think it is important to go into some detail about why it is difficult.

I would like to know the ratio of glial cells to neurons in different species. Not surprisingly, since most glial cell scientists are human, much more research has been done on the human brain. This is not to say that a lot of research hasn't been done on other species.

There are 100 billion neurons in the human brain. There are 50 times as many glial cells as neurons (a fifty-to-one ratio) or about five trillion glial cells. That is a lot. Except new research shows the ratio is really about four to one. Or maybe it is one to one. Ask a simple question. Oh, by the way, the ratio is different in different parts of the brain. So, to me, the simple answer is "we don't know, but there are a lot of human glial cells". By extension, for other species, which have been studied less, we also probably don't know the corresponding ratio for their brains.

Another thing we probably need to know is the difference between the Central Nervous System (CNS) and the Peripheral Nervous System (PNS). In humans, the CNS consists of the nerves in the brain and the spinal cord. The PNS consists of the nerves that leave the brain or the spinal cord and travel to certain areas of the body. It is assumed, and I am not disagreeing, that the main purpose of the PNS is to send information gathered by the body's sensory receptors to the CNS as quickly as possible.

Other animals have similar systems and we can assume that they operate in, more or less, the same way as the human CNS and PNS do. But the key words are "more or less".

I used the word "animals" in the previous paragraph intentionally. I could have been more precise by possibly saying "Other Vertebrates", or more likely "Other Mammals" or definitely "Other Primates". The expert in a field, in this case, biological science, wants to label animals to the nth degree. He is obsessed with knowing a cat is not a primate and a monkey is not a chimpanzee.

This verboseness of experts, as well as their tendency to write scholarly papers full of graphs, mathematical formulas, and ten-syllable Latin words, makes it more difficult to decipher what is being said and to put it into words a normal person could understand. Having said this, however, although I want the reader to understand what I am writing, my goal is not to write another cute science book for the general public.

One minor goal I do have is to occasionally ask an intelligent question that no one has thought to ask before. My major goal, however, is to understand the nature of thought. I don't care if thought occurs in the Central Nervous System, the Peripheral Nervous System, or a System we have not yet discovered. I don't care if thought occurs in the head of a man or the middle right leg of a wood bee.

My research method is to use a search engine (of course, I mean Google) to read about a subject that I believe can help support points I am trying to make. This activity often exposes me to the writings and theories of the verbose scientists I mentioned above. I then endeavor to put what I have read into real English. Sometimes during this process I will read something that I think really emphasizes an important point I hope to make. But at the time I read this, I am discussing something else. Later, when I hope to discuss my important point, I can no longer find the reference on the web. In these cases, I say "I read somewhere".

I read somewhere that the ratio of glial cells to neurons in rats was only 20 percent of what the ratio was in humans. In other words (assuming, for illustration, a one-to-one human ratio), for every ten human neurons, there would be ten human glial cells; for every ten rat neurons, there would be only two rat glial cells. The human researcher who made this observation had some species-biased conclusions.

The researcher concluded that the relative abundance of glial cells may have had a positive effect on human mental capacity, that a human's computational abilities were much greater than those of a rat. The researcher stated, or at least assumed, that a human was smarter than a rat and this research helped prove it.

The problem is the use of "smarter than". I have read there are seven kinds of intelligence. I suspect there are many more. If we think about it, we quickly realize there are many kinds of intelligence. One person can easily memorize many technical facts and impress his peers by recalling these facts quickly, yet cannot write a clear, concise sentence. Another person can invent Quantum Physics, yet think Hitler is a wonderful person. I can find calculus easy, but statistics hard. You may be able to remember a thousand faces, but only a few names. If "smart" were set in concrete, report cards covering six subjects would always contain six A's, or six F's.

How would a rat researcher define "Smart"? He might be proud to say that his subject, Mr. Rat, could identify over five hundred rat faces based mainly on the kinks, colors, and directions of various rat whiskers. Mr. Rat, who is well traveled, might also be able to identify the more than 1700 specialty cheeses found worldwide, just by a quick sniff. Smart is in the Eye of the Beholder.

I have a theory. I have a theory that I hope can be supported by observations of nature, the nature we see around us. Speaking of observations, this theory is based on two observations. First, Man discovered quantum physics about ninety years ago. We can say that Man has had about a century to figure out quantum physics, to know enough to be able to build practical devices like computers. Second, Life has existed on Earth for 1500 million years. This is 15 million times as long as Man has known about quantum physics.

I have a theory. I have a theory that Life discovered quantum physics long before Man existed. Life discovered that quantum physics could help its creatures survive, that quantum physics made thinking and being self aware possible, and creatures with this trait had a better chance of survival than whatever creatures had come before.

Now is a good time to discuss what we know about quantum computing and how close we are to creating a useful, working quantum computer.

The computers we have today (actually, any device containing a computer chip) are almost exclusively dependent on silicon. This silicon has been grown into a crystal structure - a unique arrangement of atoms, a group of atoms arranged in a particular way. In the case of silicon destined to be in a computer chip, the group of atoms is periodically repeated in three dimensions on a lattice. At this point, the lattice can be made up of billions of groups. The lattice is sliced into discs called wafers. Intel's web site explains how this process continues: Chips are built simultaneously in a grid formation on the wafer surface - A chip is a complex device that forms the brains of every computing device. While chips look flat, they are three-dimensional structures and may include as many as 30 layers of complex circuitry.

Each of the billions of groups (we can call them "bits") can have a value of "1" (on) or "0" (off) and the values of all the bits can be controlled by the complex circuitry. Modern computers can process (use in calculation or move from one location to another) billions of bits almost instantly. The key word is "almost". When we start to need to process huge numbers of bits, like maybe a million billion bits, we start to need a quantum computer.
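To make the "bit" idea concrete, here is a tiny sketch in Python (the example number and the four-bit width are my own choices): the number eleven is, to the machine, just the four bits 1011, and "processing" them amounts to shifting and masking.

```python
value = 0b1011                          # the number 11, stored as four bits: 1, 0, 1, 1
# pull the bits back out, lowest bit first, by shifting and masking
bits = [(value >> i) & 1 for i in range(4)]
print(bits)                             # [1, 1, 0, 1]
```

A modern processor does nothing more exotic than this, just billions of times per second over billions of bits.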

With modern computers, we talk about bits. With quantum computers, we talk about qubits. A qubit can have a value of "1" (on), "0" (off), or a third value which may be viewed as a mixture of the first two. I will call this value "maybe".

I think and hope I can explain quantum computers without fully understanding what a qubit is. If I had access to a friendly quantum computer expert, he might help me gain a better understanding. Lacking this access, let me describe why I am confused.

My confusion is rooted in two things I have discussed before and which I will briefly summarize below, and a quantum concept called superposition.

The University of Waterloo is a highly ranked Canadian university. From its website: Superposition is essentially the ability of a quantum system to be in multiple states at the same time — that is, something can be “here” and “there,” or “up” and “down” at the same time.

One of the things I'd like to summarize quickly is what started quantum physics in the first place - no one could determine exactly where an individual electron would be. If you fired an electron at a metal pole, the electron might hit the pole, miss the pole, or glance off the pole at different angles. You could only give probabilities of where an individual electron would go.

The other thing I'd like to summarize quickly is the famous (to quantum physicists and lay people with an interest in science) Schrodinger's cat experiment. Schrodinger's cat was an imaginary cat that Schrodinger put into a box. Then a random, unpredictable event either killed the cat or left it alive. The only way to know if the cat is alive or dead is to open the box. Quantum Physics says that, before the box is opened, the cat is in some kind of limbo, maybe both alive and dead, or half alive and half dead, maybe in the "maybe" state I mentioned above.

I also need to introduce one more quantum concept that applies to quantum computers, decoherence. A qubit, unlike a digit, is a quantum concept. Complex electronic circuitry that helps define modern computer chips can easily determine the value of a digit, whether it has a value of "1" (on) or "0" (off). A qubit, on the other hand, is very, very shy. If you even look at a qubit (for example, with electronic circuitry), it will lose its quantumness. When a qubit loses or falls out of its quantum state, quantum scientists say it has decohered. Quantum scientists do not want qubits to undergo decoherence. They seek to make practical use of qubits without looking at them.

Now let me explain my confusion about qubits. Suppose we have four different kinds of digits (stretching the word "digit", which in "binary digit" normally allows only two values). We have digit-1, digit-2, digit-3, and digit-4.

Digit-1 has one value, either a “1”, a “0”, or a “maybe”. If the value of digit-1 is “1” or “0”, modern computers are not possible. Every calculation gives the same answer, “1” or “0”: 1 + 1 = 1, 1 – 1 = 1, 0 + 0 = 0, 0 – 0 = 0. If the value is “maybe”, the circuitry of the modern computer will make each digit-1 decohere into either a “1” or “0”. In this case, digit-1 is just like digit-2.

Digit-3 is described as a three-valued kind of digit, which is the definition of the qubit, but it is again just digit-1 with one value of either “1”, “0” or “maybe”. As with digit-1, the quantum scientists have endeavored against great odds not to scare the shy “maybe” lest it undergo the dreaded decoherence. In this case, I have to wonder, although we can never look to see, what this “maybe” becomes. Is it, as in the Schrodinger live cat, dead cat dichotomy, a “1” or a “0”, and does it matter?

Consider now digit-4. It is again digit-1, but this time we are using another “maybe”, a “maybe-2”. Maybe-2 would be from the world of electron experiments where the shy maybe-2 becomes one of an infinite number of values between “1” and “0”, for example “.0587”.
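For what it may be worth, the textbook picture seems to reconcile “maybe” and “maybe-2”. A qubit carries two continuous numbers called amplitudes (so its hidden state is “maybe-2”-like), but looking at it always yields a plain “1” or “0”, with probabilities given by the squares of the amplitudes. Here is a minimal sketch in Python; the real mathematics uses complex numbers, which this toy ignores:

```python
import math
import random

def make_qubit(theta):
    # a qubit as two real amplitudes (alpha, beta) with alpha^2 + beta^2 = 1
    return (math.cos(theta), math.sin(theta))

def measure(qubit):
    # "looking" collapses the qubit: you only ever see a 0 or a 1
    alpha, _beta = qubit
    return 0 if random.random() < alpha ** 2 else 1

random.seed(0)                       # fixed seed so the run is repeatable
q = make_qubit(math.pi / 6)          # alpha^2 = 0.75, beta^2 = 0.25
counts = [0, 0]
for _ in range(10_000):
    counts[measure(q)] += 1
print(counts)                        # roughly [7500, 2500]
```

The hidden “.0587”-style value is there, but no single look ever reveals it; it only shows up in the statistics of many measurements.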

I suspect the quantum scientists are saying that “maybe” and “maybe-2” become the same thing when enough qubits are processed. Nevertheless, I would like to ask the question: If you designed a quantum computer based on a qubit with two values, a “maybe” and a “maybe-2”, would you have a better quantum computer, a super quantum computer?

Before continuing this line of thought (Did you really think I was finished?), we should discuss another quantum concept. This was another quantum idea that really made Albert Einstein mad. Einstein's theories, and thus his reputation, were built on the idea that the speed of light was constant and nothing could go faster than the speed of light. Einstein believed that any scientist who suggested that something could go faster than light speed was not a scientist, he was an idiot.

So Albert Einstein was mad when a quantum idiot introduced the concept of Entanglement. From the University of Waterloo's website: Entanglement is an extremely strong correlation that exists between quantum particles — so strong, in fact, that two or more quantum particles can be inextricably linked in perfect unison, even if separated by great distances. The particles are so intrinsically connected, they can be said to “dance” in instantaneous, perfect unison, even when placed at opposite ends of the universe. This seemingly impossible connection inspired Einstein to describe entanglement as “spooky action at a distance.”
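A classical cartoon of that correlation can be written in a few lines of Python. I should stress that this sketch captures only the "always agree" part; the genuinely spooky thing about real entanglement is that no value shared in advance, like the one below, can reproduce all of its statistics:

```python
import random

def entangled_pair():
    # toy model: a single shared outcome, fixed when the pair is created
    shared = random.choice([0, 1])
    return shared, shared

# however far apart the two halves are "measured", they always agree
results = [entangled_pair() for _ in range(1000)]
print(all(a == b for a, b in results))   # True
```

Einstein would have been happy with this classical version; it was the quantum refusal to carry the answer along in advance that he called spooky.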

My next task is to tie these quantum concepts and my thoughts to current work on achieving functional and useful quantum computers. Almost everyone agrees that it would be good to have faster computers. It is not as easy as you would think to describe how research in this area stands, especially without getting into mind numbing math related to exotic systems (what kind of systems? - varied and complex).

One of my first steps was to read an article published last year by Wired Magazine entitled "The Revolutionary Quantum Computer That May Not Be Quantum at All". This article ties a small Canadian company, D-Wave Systems, Inc., to Google, IBM, and ConAgra Foods, three very large companies. D-Wave claims to have developed a computer chip containing 512 qubits. This chip is the heart of D-Wave's Quantum Computer. As the title of the Wired article implies, there is controversy over whether or not this is really a quantum computer, but Google was impressed enough to buy one, probably paying D-Wave about ten million dollars.

To test the power of the D-Wave computer and perhaps determine if it was a quantum computer or just a regular computer, it was given a problem that normal computers regularly solve. IBM has a program called CPLEX. ConAgra uses this program to crunch global market and weather data to find the optimum price at which to sell flour. CPLEX running on a normal Intel chip based computer took 30 minutes (1,800 seconds) to find the answer. The D-Wave found the answer in about half a second - roughly 3,600 times as fast.

The D-Wave computer is a special purpose computer. It has to be programmed to accomplish a limited number of tasks. A cell phone is a general purpose computer - it can take pictures, send emails, text, play games. We need, but are a long way from having, a general purpose quantum computer.

Reliable quantum computers would revolutionize research in many fields and lead to numerous technological advances. I can mention some, without mathematical references or even long-winded definitions. Quantum computers could break the security keys that keep our financial transactions safe as we buy and sell on the Internet. To make up for this, quantum computers could develop new security keys that could never be broken. Quantum computers could process possible reactions between molecules and atoms, forecasting new and better superconducting materials that operate at room temperature - short-winded definition: superconducting materials can conduct electricity over a long distance with no resistance and thus no loss of power. Calculations performed on a quantum computer can theoretically take seconds to complete, while the same calculations performed on a silicon-based processor could take years.

To understand our current status regarding quantum computing, I want to review in more detail what quantum scientists, in general, are doing, and, in particular, the progress and setbacks of the D-Wave Scientists since the company began early this century.

The current, primary goal of quantum scientists is to develop reliable qubits. To quote again from the IQC website (the IQC is the University of Waterloo's Institute for Quantum Computing): "... we need qubits that behave the way we want them to. These qubits could be made of photons, atoms, electrons, molecules or perhaps something else. Scientists at IQC are researching a large array of them as potential bases for quantum computers. But qubits are notoriously tricky to manipulate, since any disturbance causes them to fall out of their quantum state (or 'decohere'). Decoherence is the Achilles heel of quantum computing, but it is not insurmountable. The field of quantum error correction examines how to stave off decoherence and combat other errors. Every day, researchers at IQC and around the world are discovering new ways to make qubits cooperate."

The IQC website also reports that a joint effort between their researchers and scientists from MIT has produced an experiment where twelve qubits are used together. This is a world record - at least, according to IQC. I will call this the "mainstream science view". D-Wave Systems claims that the computers they are selling each have a chip that contains 512 qubits. Their newest chip has more. Obviously, D-Wave scientists are not mainstream.

The D-Wave Computer is a ten-foot-high black box designed to keep the tiny quantum chip within it very, very cold. In fact, the temperature is close to absolute zero. The reason cold is needed is that almost anything (heat, vibration, electromagnetic noise, a stray atom) can cause a quantum system to undergo decoherence (a bad thing). The heart of the chip is a group of 512 niobium loops which, when chilled sufficiently, exhibit quantum-mechanical behavior, becoming, in effect, qubits.

Niobium is a soft, grey, ductile metal. The niobium loops are small by our standards, less than a millimeter across and barely visible. They are huge, however, relative to typical computer components. D-Wave selected them for its computer, in part, because they could easily be mass-produced by a regular microchip fabrication laboratory.

When a niobium loop is cooled to almost absolute zero, two magnetic fields form that run around the loop in two opposite directions at the same time. In physics, electricity and magnetism are the same thing - the fields can be interpreted as electrons in a superposition state (a quantum concept). If the loop exhibits other quantum properties, it can serve as a qubit. The qubits on the chip could, D-Wave hoped, be connected by quantum tunneling and entanglement. The wires needed to connect the components on the chip and the optical fiber cable that transmitted information to the outside world were engineered to stave off decoherence.

When the D-Wave computer was being designed, most "mainstream" scientists were pursuing a "gate model" quantum architecture where qubits were placed on a chip to form standard logic gates like those on regular computer circuits - the ANDs, ORs, NOTs, and so on that assemble into how a computer thinks. The D-Wave scientists decided to pursue a different architecture - one they felt would lead to a more robust chip, that is, less subject to decoherence. It would still allow their computer to solve optimization problems, which were very important, but they would not have a general purpose quantum computer.

The question was "Did they have a quantum computer at all?". And so the fun began.

In 2010, D-Wave landed its first customer, Lockheed Martin. In 2013, Google and NASA were potential new customers of D-Wave. NASA wanted to determine the best route for its Mars Rover to follow as it explored Mars - a classic, difficult optimization problem that could be handled by D-Wave's computer. Before they would buy, however, Google and NASA demanded benchmark tests.

A benchmark test, in this case, is basically running a computer program on one computer - a standard desktop silicon based computer, and then running the same program again on another computer - the D-Wave Quantum Computer. If the D-Wave Quantum Computer ran the program faster and finished much more quickly than the standard computer, it was most likely really a quantum computer. If, on the other hand, for whatever reason, its niobium qubits had suffered decoherence, D-Wave's computer would have become a very expensive, standard desktop computer. Its performance in a benchmark test would not be substantially different from the performance of the silicon based computer.
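The spirit of such a benchmark fits in a few lines of Python. This is a miniature stand-in, of course: the two functions below (my own invented example, nothing to do with CPLEX) compute the same sum, one the slow way and one with a closed-form formula, and we time them the way the testers timed the two machines:

```python
import time

def benchmark(fn, *args, repeats=5):
    # run fn several times and keep the best wall-clock time
    best = float("inf")
    for _ in range(repeats):
        t0 = time.perf_counter()
        fn(*args)
        best = min(best, time.perf_counter() - t0)
    return best

def slow_sum(n):
    # the "standard desktop" contender: a plain Python loop
    total = 0
    for i in range(n):
        total += i
    return total

def fast_sum(n):
    # the "quantum" contender: the closed-form formula n*(n-1)/2
    return n * (n - 1) // 2

# same answer, very different running times
speedup = benchmark(slow_sum, 1_000_000) / benchmark(fast_sum, 1_000_000)
```

The crucial check, just as with D-Wave, is that both contenders produce the same answer; only then does the speedup mean anything.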

Three benchmark tests were performed. One, mentioned above, used the IBM CPLEX program. D-Wave passed with flying colors. For the other two benchmark tests, the results, unfortunately, were less clear cut - but Google and NASA bought D-Wave's machine.

Mainstream Quantum Scientists, with their 12-qubit experiment, had been skeptical of the D-Wave 512-qubit claim. With the Lockheed Martin purchase in 2011, the mainstreamers could, for the first time, run unbiased benchmark tests. Most of these tests showed that standard, non-quantum, special purpose (not general purpose) computers would perform just as well as the D-Wave. It is worth noting that some of these results were available when, in 2013, the Google - NASA purchase was made.

D-Wave responded that the benchmark tests were really not unbiased. They had reasons why the tests were unfair. In addition, the mainstreamers had been very thorough, running many tests. On some of these, the D-Wave computer had performed well. D-Wave scientists wanted to know why - did these particular tests show less decoherence? Could these results help them make their quantum computer better?

D-Wave still has the support of Google and NASA, who now have a D-Wave 2X Computer, which sports 1,097 qubits. From a Tech Times article published in December 2015: "Google Director of Engineering Hartmut Neven said that what the D-Wave 2X can process in a span of a second is something that a single-core classical computer can solve in a span of 10,000 years." The mainstreamers immediately, enthusiastically, and vehemently attacked the claim.

This war about the number, kind, and uses of qubits is very informative. When you look at the big picture, however, you may feel, as I do, that we have only scratched the surface. We need to know a lot, lot more before we can hope for major advances in the field of quantum computing.

For 1,500 million years, Life has been moving molecules, atoms, and maybe even sub-atomic particles around, with the, perhaps blind, goal of helping each individual creature survive and reproduce.

When we look at particle physics, we see an IBM researcher move 35 atoms to spell "IBM", while within his body, Life moves billions of atoms so that the researcher can stand there, look at his work, and think about what he has done.

When we look at quantum computing, we see scientists battling each other over how many qubits they have created, while having no real understanding of even what a qubit is. They seem a long way from understanding their mysteries and being able to build practical devices that work in the "real" world. Maybe if quantum computing experts turned the limited, working tools they have on living cells, they could learn enough to make real progress.

The first bee took to the air 130 million years ago. The first tennis racket was produced 142 years ago. Even today, you do not see a tennis racket on every street corner. It is a safe bet that the wood bees on my son's patio had never seen a man carrying a tennis racket. Instinct, whatever that is, could not have told the bees to beware. Only the death of a wood bee made the survivors realize: man plus tennis racket equals danger.

Ancient wood bees liked to tunnel into wood. They had the teeth, or whatever, for it. Each bee was proud of the perfect round entrance and knew her particular tunnel would be the best home for her future offspring.

A wood bee flying around a young Abe Lincoln's log cabin a couple of centuries ago would have felt the same. The neurons in his wood bee brain had changed little from those of his ancient ancestor. When he saw a gangly young man walking around, he felt anger. He could imagine the giant monster stepping on him or eating his mate. Wood bees can't sell. They can't speak English. The only way to convince the monster to leave was to buzz angrily around his head. And it usually worked.

When my son served a wood bee into the backyard, the surviving wood bees' anger probably increased, but a new emotion, fear, instantly appeared. To any intelligent creature, the solution was obvious - fly close to the beams, don't give the monster a clear shot.

More THINKING LONGER AND HARDER. - Mike Stewart - mike@esearchfor.com