Thursday, 28 May 2009

Everything you always wanted to know about female ejaculation (but were afraid to ask)


Sharon Moalem is an evolutionary biologist and neurogeneticist at the Mount Sinai School of Medicine in New York City. His book, How Sex Works, was published this month in the US by HarperCollins

WHEN the British Board of Film Classification ordered 6 minutes and 12 seconds of material cut from British Cum Queens in 2002, they found themselves under attack from an unlikely quarter: a group of feminists.

The offending segment showed some of the female participants apparently ejaculating fluid from their genitals on orgasm. The film board stated that female ejaculation did not exist, so the actresses must have been urinating. And urinating on another actor on film is banned under the UK's Obscene Publications Act.

The group Feminists Against Censorship marshalled all the scientific evidence they could find to prove that some women do in fact ejaculate. The film board eventually backed down from its complete denial of the phenomenon, stating that female ejaculation was a "controversial and much debated area".

It was only a partial climbdown, however, as the film board insisted that the scenes in question were "nothing other than straightforward urination masquerading as ejaculation". In their defence, most pornography scenes that depict women ejaculating are indeed staged. Either the fluid is put into the vagina beforehand off-camera, or the actresses are simply urinating.

The dispute raises an intriguing question. In the 21st century, when human biology has been investigated right down to the genetic level, how can the existence of female ejaculation still be open to debate?

Medical textbooks are silent on this aspect of female physiology and most physicians never learn anything of it, unless of course they experience it themselves or witness it in their partners.

In the past few years, however, there has been an upsurge of research into the female sexual response. It seems that, even today, the human body may be harbouring a few surprises.

Although still controversial, many scientists now accept that some women can ejaculate some kind of fluid during sexual arousal or orgasm. Just how common it is, what the fluid is, and whether it serves any kind of function are some of the most hotly debated questions of sex research today, and I am playing a small part in helping to investigate them.

Many historical texts, such as the Kama Sutra, spoke about female "semen", as did writers, including the Greek physician Hippocrates. Sometimes the writers may have been referring to everyday vaginal secretions, which increase during sexual arousal. However, there are several references to something more akin to ejaculation. In the 17th century, the Dutch physician and anatomist Regnier De Graaf spoke of "liquid as usually comes from the pudenda in one gush".

In the last century, Ernst Gräfenberg, the German doctor who gave his name to the controversial G spot, drew attention to female ejaculation in a 1950 paper published in The International Journal of Sexology. "This convulsory expulsion of fluids occurs always at the acme of orgasm and simultaneously with it," he wrote. "Occasionally the production of fluids is so profuse that a large towel has to be spread under the woman to prevent the bed sheets getting soiled."

Most people did not take the paper seriously and thought Gräfenberg was probably describing a type of incontinence. It is certainly true that a few women experience loss of bladder control during sex, sometimes at the moment of penetration or at orgasm. But some who end up being investigated and even surgically treated for such "coital incontinence" may in fact be experiencing ejaculation. (And probably some who think they ejaculate may in fact be leaking urine.)


Ground-breaker


It is unknown how common genuine female ejaculation might be, or even whether it occurs solely on orgasm or merely during heightened sexual arousal. Just as with men's semen, women who believe that they are ejaculating report great variation in the nature and volume of the fluid produced. It can range from clear to milky-white in colour, and the amount of fluid can range from a few drops to more than a quarter of a cup.

The real ground-breaker came in 1981, when renowned US sexologists Beverly Whipple and John Perry published a case report of a woman apparently happy to ejaculate under laboratory conditions. Watched by a team of researchers, the woman was vaginally stimulated by her husband until she climaxed, then ejaculated, releasing noticeable amounts of fluid.

According to Whipple, when Philadelphia gynaecologist Martin Weisberg saw their report he said: "Bull... I spend half my waking hours examining, cutting apart, putting together, removing or rearranging female reproductive organs... Women don't ejaculate."

In response, Whipple offered to set Weisberg up with a personal demonstration. The following is Weisberg's description of what he witnessed, which was later included in Whipple and her co-author's bestselling book, The G Spot and Other Recent Discoveries About Human Sexuality: "The subject seemed to perform a Valsalva manoeuvre [bearing down as if starting to defecate] and seconds later several cc's of milky fluid shot out of the urethra."

Impressive as that demonstration sounds, it is interesting to note that the fluid appeared to emerge from the urethra, the tube that drains urine from the bladder to an exit near the entrance of the vagina (see diagram). Could it have been urine after all?

Not according to chemical analysis of the fluid, carried out by Whipple and a few others since then. They found the ejaculate contained very low levels of urea and creatinine, the two main chemical hallmarks of urine.

One marker it did contain, however, was prostate-specific antigen, or PSA. That's the same chemical produced by the prostate gland in men.

The male prostate is usually around the size of a walnut, weighing about 23 grams. It surrounds the urethra like a doughnut and is encased by a fibromuscular sheet, which contracts during ejaculation to help expel prostatic fluid into the urethra, where it mixes with the other components of semen.

Less widely known is that women have prostate tissue too. And this, it seems, is the best candidate for the source of female ejaculate. Also known as Skene's glands or the paraurethral glands, these structures were officially renamed the "female prostate" by the Federative Committee on Anatomical Terminology in 2001.

The female prostate seems to vary in size and shape much more than the male version, with some women lacking any appreciable amount of prostate tissue, according to autopsy studies by Slovakian pathologist Milan Zaviacic. This may explain women's differing experiences.


G spot


If the tissue is there at all it lies next to, or sometimes surrounds, the urethra, which is adjacent to the vagina's anterior wall. In other words, if the woman is lying on her back, the prostate is directly above the uppermost wall of her vagina.

This is roughly the same area as the G spot, the part of the vagina that is particularly sensitive to stimulation, although even the G spot's existence is controversial. Assuming there is such a thing, however, it is beginning to look to many sexologists as if the G spot is just the name for the best place to stimulate a woman's prostate. Variation in the amount of prostate tissue could explain why not all women find stimulation of this area arousing - in other words, whether or not they have a G spot.

When anatomy textbooks show the female prostate - and not all do - the gland tissue is sometimes shown with ducts draining fluid to two pinhole-sized openings next to the urethra, just above the vagina. Others, however, suggest there may be as many as 20 ducts, and that they drain into the urethra, near its external opening (as shown above).

One of the more interesting reports on female ejaculation was published in 2007 by a team led by Florian Wimpissinger, an Austrian urologist at Rudolfstiftung Hospital in Vienna (The Journal of Sexual Medicine, vol 4, p 1388).

Two women in their 40s came to the attention of the researchers after they attended a sexual medicine clinic because of "significant fluid expulsion during orgasm". The women agreed to produce samples of the fluid by masturbation in the lab. When analysed, this fluid was found to be chemically distinct from urine, with high PSA and other features more akin to male ejaculate.

Ultrasound scans showed that both women had large prostate glands. One woman's scan showed "a hyperintense structure surrounding the entire length of the urethra", which "closely resembles that of the male prostate", according to the authors. By inserting a fine flexible tube with a camera on the end into the urethra, the researchers could see a duct exiting just inside the entrance to the urethra.

The team has another paper due to be published in The Journal of Sexual Medicine next month, describing how they used MRI scans to investigate the prostate of seven women who attended a urology clinic. In this study, however, the team did not find a correlation between prostate size and the ability to ejaculate. Larger studies of this kind are obviously needed.

Partly thanks to the growing body of research in this area, there seems to be increasing awareness of female ejaculation among the general public. Some sex educators provide workshops claiming to teach women how to ejaculate (as well as how to discover their G spots and have more orgasms).

One question that is rarely addressed, however, is whether female ejaculation has a biological function. Not every aspect of our physiology has to have such a role. For example, men are thought to have nipples only because women need them, and male and female embryos develop from the same body plan. Some have even cited this as the reason why women orgasm. Perhaps female ejaculation has a similar explanation.

On the other hand, it is tempting to speculate about what purpose female ejaculation could fulfil. Whipple and Perry have suggested that female ejaculation evolved to combat infections of the urethra and bladder. Many secretions and fluids produced by the human body, such as saliva, tears, and indeed male ejaculate, are awash with compounds that inhibit the growth of bacteria.

Urinary tract infections are relatively common in women, and sometimes arise from bacteria spread to the urethra during sex. A gush of antimicrobial fluid at the entrance to the urethra around the time of sex might help fight off such bacteria.

Along with my colleagues I am investigating whether female ejaculate contains some of the antimicrobial chemicals present in semen, such as zinc. If so, then this fascinating and long-neglected phenomenon might turn out to be more than just a sexual curiosity.



Thursday, 21 May 2009

Discover Interview: Lisa Randall

One of physics' brightest stars ventures into 10 dimensions, visits other universes, explains gravity, and keeps her sense of humor.
by Corey S. Powell


Starting in earnest a couple of decades ago, a group of physicists began seeking deeper truth in string theory, which holds that the fundamental particles of nature consist of minuscule vibrating strands of energy. Problem is, the theory works well only if the strings vibrate in more than three dimensions. Randall, a theoretical physicist at Harvard University, is a leading light of a second generation of researchers who are taking that idea to an even grander level, envisioning not just tiny strands but huge territories of higher dimension, called branes. She thinks this approach could revolutionize our understanding of gravity and uncork the deepest workings of the universe.

Yet Randall is resolutely down to earth. She chafes at the thought that her ideas should be restricted to the confines of academia, she both respects and swats aside her importance as a woman in a male-dominated field—and then there is that laugh, hearty and throaty, that erupts repeatedly during our conversation. She finds this world rich and comforting and funny. She just wants to give it a little more dimension.


Where did your interest in physics begin?

When I was in school I liked math because all the problems had answers. Everything else seemed very subjective. The teachers in English class would say, "What is the reason that this is an important book?" They'd look for the three good reasons, whereas you might think of some other one. I didn't like the arbitrariness of that. Later I decided that just doing math would drive me crazy. I'd be up all night working on a problem, and I thought, "I can't live the rest of my life like this." [Laughs] I wanted something more connected to the world.


Speaking of staying connected to the world—in your work you imagine extra dimensions, but you still have to live on the same planet as the rest of us. Do you carry around the image of other dimensions in your mind?

It's momentary. In my book I describe a time walking over the Charles River and thinking, "You know, I really do believe there are extra dimensions out there." Sometimes I have a sense of what I'm seeing being a small fraction of what's there. Not always there, but probably more often than I realize. Something will come up, and I'll realize I'm thinking about the world a little differently than my friends.


So you intuitively believe higher dimensions really exist?

I don't see why they shouldn't. In the history of physics, every time we've looked beyond the scales and energies we were familiar with, we've found things that we wouldn't have thought were there. You look inside the atom and eventually you discover quarks. Who would have thought that? It's hubris to think that the way we see things is everything there is.


If there are more than three dimensions out there, how does that change our picture of the universe?

What I'm studying is branes, membranelike objects in higher-dimensional space. Particles could be stuck to a three-dimensional brane, sort of like things could be stuck to the two-dimensional surface of a shower curtain in our three-dimensional space. Maybe electromagnetism spreads out only over three dimensions because it's trapped on a three-dimensional brane. It could be that everything we know is stuck on a brane, except for gravity.


Yet we very clearly see only three dimensions when we look around. Where could the other dimensions be hiding?

The old answer was that the extra dimensions were tiny: If something is sufficiently small, you just don't experience it. That's the way things stood until the 1990s, when Raman Sundrum and I realized you could have an infinite extra dimension if space-time is warped. Then with Andreas Karch, I found something even more dramatic—that we could live in a pocket of three dimensions in a higher-dimensional universe. It could be that where we are it looks as if there's only three dimensions in space, but elsewhere it looks like there's four or even more dimensions in space.


And there could be a whole other universe set up that way?

Possibly. It would be a different universe because, for example, bound orbits [like Earth's path around the sun] work only in three dimensions of space. And the other universe could have different laws of physics. For example, they could have a completely different force that we are immune to. We don't experience that force, and they don't experience, say, electromagnetism. So it could be that we're made of quarks and electrons, while they're made up of totally different stuff. It could be a completely different chemistry, different forces—except for gravity, which we believe would be shared.


What is so special about gravity?

In string theory there are two types of strings, open ones with ends and closed ones that loop around. Open strings are anchored to the surface of a brane, so the particles associated with them are stuck on the brane. If you have an open string associated with the electron, for example, it's on a brane. Gravity is associated with a closed string. It has no end, and there is no mechanism for confining it to a brane. Gravity can spread out anywhere, so it really is different. It can leak out a little into extra dimensions. That can explain why gravity is so weak compared with the other forces. After all, a little magnet can lift a paper clip against the pull of the entire Earth.
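The dilution of gravity that Randall describes can be made quantitative. The formula below is not from the interview itself; it is the textbook result for flat, compact extra dimensions (the ADD picture), whereas Randall's own models use warped geometry, but it shows the leakage idea in its simplest form. At distances much smaller than the size R of n extra dimensions, gravitational field lines spread through all the dimensions and the inverse-square law steepens:

```latex
% Newtonian gravity with n compact extra dimensions of size R.
% At short range the field lines spread through all 3+n space
% dimensions, so the force falls off faster than inverse-square:
F(r) \sim \frac{G_{4+n}\, m_1 m_2}{r^{2+n}}, \qquad r \ll R.
% At long range the extra dimensions are already "filled", and
% the familiar inverse-square law is recovered, but with an
% effective Newton constant diluted by the extra-dimensional volume:
F(r) \sim \frac{G_{4+n}}{R^{n}}\,\frac{m_1 m_2}{r^{2}}, \qquad r \gg R.
```

The second line is the sense in which gravity can look feeble to us: the fundamental higher-dimensional coupling G_{4+n} may be large, but what we measure at everyday distances is that coupling divided by the volume of the dimensions we cannot see.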


Some of these ideas sound, frankly, a bit crazy to the average person. Where do they come from?

One reason people think about extra dimensions is string theory, the hypothesis that fundamental particles are actually oscillations of tiny strands of energy. String theory gives you a way to combine two very different models of the world, quantum mechanics and general relativity. Basically, quantum mechanics applies on atomic scales, and general relativity applies on big scales. We believe there should be a single theory that works over all regimes. String theory does that, but only in a universe that has more than three dimensions of space. More generally, there's stuff we don't understand if there are only three dimensions of space, and some of those questions seem to have answers if there are extra dimensions. Also, no fundamental physical theory singles out three dimensions of space. The theory of gravity allows any number. So it's logical to think what the world would look like if extra dimensions are there.


How will we know if your ideas are right?

Experimentalists will look for what are called Kaluza-Klein particles, which are associated with the hidden dimensions. The Large Hadron Collider [a particle accelerator on the French-Swiss border] could have enough energy to produce these particles. In our theory, Kaluza-Klein particles will decay in the detector—you find the decayed product and you can reconstruct what was there. That would provide very strong evidence of extra dimensions. Maybe within five years we'll know the answers.



These are costly experiments. Do you worry about the public's willingness to support such purely theoretical research?

I'm really concerned about it. If we don't do it now, we'll probably never do it. We've built up the technology; we're at a point where if we don't continue, we'll lose that expertise, and we'll have to start all over again. True, it's expensive, but at the end of the day I believe it will be worth it. It makes a difference in terms of who we are, what we think, how we view the world. These are the kinds of things that get people excited about science, so you have a more educated public.


One of the amazing things about your work is that so much of it comes straight from your imagination, not from rooting around in the laboratory. It seems very much like chalk-and-blackboard research.

Right, the blackboard. Those are the things that seem to strike people, that we have blackboards with equations all over them and that we are talking to each other a lot; we're not just going into our offices and ignoring the rest of the world. But we do just go and think sometimes. Once you're really focused, if you get jogged out of it, you have to go back and really reestablish that. It's like Fred Flintstone and his bowling ball: You don't want to interrupt someone when they're in that state. Then again, sometimes we're just talking and writing together on a piece of paper, and sometimes we're at that blackboard putting ideas back and forth. Our work is all those things. It's reading what other people have done, trying to puzzle through something, getting stuck, getting unstuck, trying to find different ways around a problem.


You don't exactly fit the image of the graying, tweedy professor. Does being a young woman in a male-dominated field carry special responsibilities?

If only I was still young! [Laughs] I thought maybe I'd make it all the way through an interview without having to talk about this. But, yeah, I think it does. I'm probably more careful, and probably I spend more time on this particular issue. Also, in writing my book, I felt it had better be good, because there aren't that many women in the field, and I thought it would be subject to extra scrutiny. So there is extra responsibility; the flip side is that potentially there's extra reward if it draws a more diverse group into physics.


Outside your own area of research, where do you see the most vibrant things happening in science today?

Neuroscience is exciting. Understanding how thoughts work, how connections are made, how the memory works, how we process information, how information is stored—it's all fascinating. Experimentally, though, we're still rather limited in what we can do. I don't even know what consciousness is. I'd like someone to define consciousness.


Many people would say physics has a long way to go too. Does it bother you that the things you're excited about now may seem quaint as soon as someone comes up with a better theory?

True, we haven't found all the answers, but we've found some and we're finding more. The fact that we don't know everything doesn't mean we know nothing. People have asked me, "Why bother, if you don't get final answers?" I said, "If someone gave me a dessert, and I knew it wasn't the best dessert ever, I would still be really happy to eat it and wait for the next one."


Will physics ever be able to tackle the biggest questions—for instance, why does the universe even bother to exist?

Science is not religion. We're not going to be able to answer the "why" questions. But when you put together all of what we know about the universe, it fits together amazingly well. The fact that inflationary theory [the current model of the Big Bang] can be tested by looking at the cosmic microwave background is remarkable to me. That's not to say we can't go further. I'd like to ask: Do we live in a pocket of three-dimensional space and time? We're asking how this universe began, but maybe we should be asking how a larger, 10-dimensional universe began and how we got here from there.


This sounds like your formula for keeping science and religion from fighting with each other.

A lot of scientists take the Stephen Jay Gould approach: Religion asks questions about morals, whereas science just asks questions about the natural world. But when people try to use religion to address the natural world, science pushes back on it, and religion has to accommodate the results. Beliefs can be permanent, but beliefs can also be flexible. Personally, if I find out my belief is wrong, I change my mind. I think that's a good way to live.


So does your science leave space for untestable faith? Do you believe in God?

There's room there, and it could go either way. Faith just doesn't have anything to do with what I'm doing as a scientist. It's nice if you can believe in God, because then you see more of a purpose in things. Even if you don't, though, it doesn't mean that there's no purpose. It doesn't mean that there's no goodness. I think that there's a virtue in being good in and of itself. I think that one can work with the world we have. So I probably don't believe in God. I think it's a problem that people are considered immoral if they're not religious. That's just not true. This might earn me some enemies, but in some ways they may be even more moral. If you do something for a religious reason, you do it because you'll be rewarded in an afterlife or in this world. That's not quite as good as something you do for purely generous reasons.



The Biocentric Universe Theory: Life Creates Time, Space, and the Cosmos Itself

Stem-cell guru Robert Lanza presents a radical new view of the universe and everything in it.
by Robert Lanza and Bob Berman



The farther we peer into space, the more we realize that the nature of the universe cannot be understood fully by inspecting spiral galaxies or watching distant supernovas. It lies deeper. It involves our very selves.

This insight snapped into focus one day while one of us (Lanza) was walking through the woods. Looking up, he saw a huge golden orb web spider tethered to the overhead boughs. There the creature sat on a single thread, reaching out across its web to detect the vibrations of a trapped insect struggling to escape. The spider surveyed its universe, but everything beyond that gossamer pinwheel was incomprehensible. The human observer seemed as far off to the spider as telescopic objects seem to us. Yet there was something kindred: We humans, too, lie at the heart of a great web of space and time whose threads are connected according to laws that dwell in our minds.

Is the web possible without the spider? Are space and time physical objects that would continue to exist even if living creatures were removed from the scene?

Figuring out the nature of the real world has obsessed scientists and philosophers for millennia. Three hundred years ago, the Irish empiricist George Berkeley contributed a particularly prescient observation: The only things we can perceive are our perceptions. In other words, consciousness is the matrix upon which the cosmos is apprehended. Color, sound, temperature, and the like exist only as perceptions in our head, not as absolute essences. In the broadest sense, we cannot be sure of an outside universe at all.

For centuries, scientists regarded Berkeley’s argument as a philosophical sideshow and continued to build physical models based on the assumption of a separate universe “out there” into which we have each individually arrived. These models presume the existence of one essential reality that prevails with us or without us. Yet since the 1920s, quantum physics experiments have routinely shown the opposite: Results do depend on whether anyone is observing. This is perhaps most vividly illustrated by the famous two-slit experiment. When someone watches a subatomic particle or a bit of light pass through the slits, the particle behaves like a bullet, passing through one hole or the other. But if no one observes the particle, it exhibits the behavior of a wave that can inhabit all possibilities—including somehow passing through both holes at the same time.

Some of the greatest physicists have described these results as so confounding they are impossible to comprehend fully, beyond the reach of metaphor, visualization, and language itself. But there is another interpretation that makes them sensible. Instead of assuming a reality that predates life and even creates it, we propose a biocentric picture of reality. From this point of view, life—particularly consciousness—creates the universe, and the universe could not exist without us.

MESSING WITH THE LIGHT
Quantum mechanics is the physicist’s most accurate model for describing the world of the atom. But it also makes some of the most persuasive arguments that conscious perception is integral to the workings of the universe. Quantum theory tells us that an unobserved small object (for instance, an electron or a photon—a particle of light) exists only in a blurry, unpredictable state, with no well-defined location or motion until the moment it is observed. This is Werner Heisenberg’s famous uncertainty principle. Physicists describe the phantom, not-yet-manifest condition as a wave function, a mathematical expression used to find the probability that a particle will appear in any given place. When a property of an electron suddenly switches from possibility to reality, some physicists say its wave function has collapsed.

What accomplishes this collapse? Messing with it. Hitting it with a bit of light in order to take its picture. Just looking at it does the job. Experiments suggest that mere knowledge in the experimenter’s mind is sufficient to collapse a wave function and convert possibility to reality. When particles are created as a pair—for instance, two electrons in a single atom that move or spin together—physicists call them entangled. Due to their intimate connection, entangled particles share a wave function. When we measure one particle and thus collapse its wave function, the other particle’s wave function instantaneously collapses too. If one photon is observed to have a vertical polarization (its waves all moving in one plane), the act of observation causes the other to instantly go from being an indefinite probability wave to an actual photon with the opposite, horizontal polarization—even if the two photons have since moved far from each other.

In 1997 University of Geneva physicist Nicolas Gisin sent two entangled photons zooming along optical fibers until they were seven miles apart. One photon then hit a two-way mirror where it had a choice: either bounce off or go through. Detectors recorded what it randomly did. But whatever action it took, its entangled twin always performed the complementary action. The communication between the two happened at least 10,000 times faster than the speed of light. It seems that quantum news travels instantaneously, limited by no external constraints—not even the speed of light. Since then, other researchers have duplicated and refined Gisin’s work. Today no one questions the immediate nature of this connectedness between bits of light or matter, or even entire clusters of atoms.

Before these experiments most physicists believed in an objective, independent universe. They still clung to the assumption that physical states exist in some absolute sense before they are measured.

All of this is now gone for keeps.

WRESTLING WITH GOLDILOCKS
The strangeness of quantum reality is far from the only argument against the old model of reality. There is also the matter of the fine-tuning of the cosmos. Many fundamental traits, forces, and physical constants—like the charge of the electron or the strength of gravity—make it appear as if everything about the physical state of the universe were tailor-made for life. Some researchers call this revelation the Goldilocks principle, because the cosmos is not “too this” or “too that” but rather “just right” for life.

There are several candidate explanations for this fine-tuning. One is simply to argue for incredible coincidence. Another is to say, “God did it,” which explains nothing even if it is true.

The third explanation invokes a concept called the anthropic principle, first articulated by Cambridge astrophysicist Brandon Carter in 1973. This principle holds that we must find the right conditions for life in our universe, because if such life did not exist, we would not be here to find those conditions. Some cosmologists have tried to wed the anthropic principle with the recent theories that suggest our universe is just one of a vast multitude of universes, each with its own physical laws. Through sheer numbers, then, it would not be surprising that one of these universes would have the right qualities for life. But so far there is no direct evidence whatsoever for other universes.

The final option is biocentrism, which holds that the universe is created by life and not the other way around. This is an explanation for and extension of the participatory anthropic principle described by the physicist John Wheeler, a disciple of Einstein’s who coined the terms wormhole and black hole.

SEEKING SPACE AND TIME
Even the most fundamental elements of physical reality, space and time, strongly support a biocentric basis for the cosmos.

According to biocentrism, time does not exist independently of the life that notices it. The reality of time has long been questioned by an odd alliance of philosophers and physicists. The former argue that the past exists only as ideas in the mind, which themselves are neuroelectrical events occurring strictly in the present moment. Physicists, for their part, note that all of their working models, from Isaac Newton’s laws through quantum mechanics, do not actually describe the nature of time. The real point is that no actual entity of time is needed, nor does it play a role in any of their equations. When they speak of time, they inevitably describe it in terms of change. But change is not the same thing as time.

To measure anything’s position precisely, at any given instant, is to lock in on one static frame of its motion, as in the frame of a film. Conversely, as soon as you observe a movement, you cannot isolate a frame, because motion is the summation of many frames. Sharpness in one parameter induces blurriness in the other. Imagine that you are watching a film of an archery tournament. An archer shoots and the arrow flies. The camera follows the arrow’s trajectory from the archer’s bow toward the target. Suddenly the projector stops on a single frame of a stilled arrow. You stare at the image of an arrow in midflight. The pause in the film enables you to know the position of the arrow with great accuracy, but you have lost all information about its momentum. In that frame it is going nowhere; its path and velocity are no longer known. Such fuzziness brings us back to Heisenberg’s uncertainty principle, which describes how measuring the location of a subatomic particle inherently blurs its momentum and vice versa.
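The blur in the film-frame analogy is exactly what Heisenberg's inequality quantifies. Writing Δx for the uncertainty in position and Δp for the uncertainty in momentum, the principle states:

```latex
\Delta x \,\Delta p \;\ge\; \frac{\hbar}{2}
```

Freezing the frame drives Δx toward zero, so Δp, and with it the arrow's velocity, becomes arbitrarily uncertain.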

All of this makes perfect sense from a biocentric perspective. Everything we perceive is actively and repeatedly being reconstructed inside our heads in an organized whirl of information. Time in this sense can be defined as the summation of spatial states occurring inside the mind. So what is real? If the next mental image is different from the last, then it is different, period. We can award that change with the word time, but that does not mean there is an actual invisible matrix in which changes occur. That is just our own way of making sense of things. We watch our loved ones age and die and assume that an external entity called time is responsible for the crime.

There is a peculiar intangibility to space, as well. We cannot pick it up and bring it to the laboratory. Like time, space is neither physical nor fundamentally real in our view. Rather, it is a mode of interpretation and understanding. It is part of an animal’s mental software that molds sensations into multidimensional objects.

Most of us still think like Newton, regarding space as a sort of vast container without walls. But our notion of space is false. Shall we count the ways? 1. Distances between objects mutate depending on conditions like gravity and velocity, as described by Einstein’s relativity, so that there is no absolute distance between anything and anything else. 2. Empty space, as described by quantum mechanics, is in fact not empty but full of potential particles and fields. 3. Quantum theory even casts doubt on the notion that distant objects are truly separated, since entangled particles can act in unison even if separated by the width of a galaxy.

UNLOCKING THE CAGE
In daily life, space and time are harmless illusions. A problem arises only because, by treating these as fundamental and independent things, science picks a completely wrong starting point for investigations into the nature of reality. Most researchers still believe they can build from one side of nature, the physical, without the other side, the living. By inclination and training these scientists are obsessed with mathematical descriptions of the world. If only, after leaving work, they would look out with equal seriousness over a pond and watch the schools of minnows rise to the surface. The fish, the ducks, and the cormorants, paddling out beyond the pads and the cattails, are all part of the greater answer.

Recent quantum studies help illustrate what a new biocentric science would look like. Just months ago, Nicolas Gisin announced a new twist on his entanglement experiment; in this case, he thinks the results could be visible to the naked eye. At the University of Vienna, Anton Zeilinger’s work with huge molecules called buckyballs pushes quantum reality closer to the macroscopic world. In an exciting extension of this work—proposed by Roger Penrose, the renowned Oxford physicist—not just light but a small mirror that reflects it becomes part of an entangled quantum system, one that is billions of times larger than a buckyball. If the proposed experiment ends up confirming Penrose’s idea, it would also confirm that quantum effects apply to human-scale objects.

Biocentrism should unlock the cages in which Western science has unwittingly confined itself. Allowing the observer into the equation should open new approaches to understanding cognition, from unraveling the nature of consciousness to developing thinking machines that experience the world the same way we do. Biocentrism should also provide stronger bases for solving problems associated with quantum physics and the Big Bang. Accepting space and time as forms of animal sense perception (that is, as biological), rather than as external physical objects, offers a new way of understanding everything from the microworld (for instance, the reason for strange results in the two-slit experiment) to the forces, constants, and laws that shape the universe. At a minimum, it should help halt such dead-end efforts as string theory.

Above all, biocentrism offers a more promising way to bring together all of physics, as scientists have been trying to do since Einstein’s unsuccessful unified field theories of eight decades ago. Until we recognize the essential role of biology, our attempts to truly unify the universe will remain a train to nowhere.

Adapted from Biocentrism: How Life and Consciousness Are the Keys to Understanding the True Nature of the Universe, by Robert Lanza with Bob Berman, published by BenBella Books in May 2009.

Beyond the Higgs

Looking for "the smoking gluon"
by Stephen Cass

Although the ATLAS and CMS experiments are focused on the search for the Higgs boson, they are not the only apparatus taking advantage of the huge energies of the Large Hadron Collider (LHC). By smashing together nuclei of lead—the largest particles the LHC can handle—the ALICE experiment will create a quantum-scale fireball 100,000 times the temperature of the core of the sun. This will echo a moment that took place nearly fourteen billion years ago, when time was just a split second old and the universe barely the size of an orange. Temperatures then were so hot that all matter existed in the form of its most basic building blocks—minuscule subatomic particles called quarks. Quarks normally exist only in pairs or triplets, tightly bound by other particles that are aptly named gluons. But in the first eye-blink of existence, individual quarks and gluons floated about freely in the primordial soup.

ALICE’s researchers calculate that each head-on collision at the LHC will spawn a speck of this quark-gluon plasma, which will hang around for less than a trillionth of a trillionth of a second. And then, in accordance with Einstein’s famous equation, E=mc2, the fireball’s energy will be converted into new matter, spraying out more than 10,000 particles into the waiting arms of the detectors. Analyze these particles, and you will get a handle on how the quark-gluon plasma behaves.
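A back-of-the-envelope sketch shows how E=mc² turns a fireball's energy into a spray of particles. The 10 TeV figure below is purely illustrative, not an ALICE specification; it simply shows why thousands of proton-scale particles can materialize from a single collision's worth of energy:

```python
# E = m c^2: converting fireball energy into an equivalent mass.
# The 10 TeV energy figure here is illustrative, not an LHC specification.
C = 299_792_458.0          # speed of light, m/s
EV = 1.602_176_634e-19     # one electronvolt in joules

energy_j = 10e12 * EV      # a hypothetical 10 TeV of collision energy
mass_kg = energy_j / C**2  # mass that energy can materialize as

# Express the result in proton masses to get a feel for the particle count
PROTON_KG = 1.672_621_9e-27
ratio = mass_kg / PROTON_KG  # on the order of 10^4 proton masses
```

Ten TeV alone is equivalent to roughly ten thousand proton masses, which is why a detector can be showered with so many new particles from each collision.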

Physicists believe that many of the conditions in the universe today were frozen in during that first split second. For instance, one theory holds that when the quark-gluon soup turned into more ordinary matter, it did so in lumps that eventually gave rise to galaxies and clusters of galaxies. Over lunch in the staff cafeteria, theoretician John Ellis explains that this idea has already fallen out of fashion, mainly because the theory supposed the quark-gluon plasma to be a smooth, disconnected gas. But earlier this year, physicists at Brookhaven National Laboratory caught a glimpse of the quark-gluon plasma and discovered that it looks much more like a thick, viscous liquid. If so, how did galaxies get started? ALICE may help find the answer. “If it’s more like treacle,” Ellis says, “surely the gobs of treacle would freeze differently from if it was gas. The Brookhaven results look convincing, but you’d always like better evidence. We really want to see a smoking gun, or in this case a smoking gluon.”

The LHCb experiment seeks to find out why the Big Bang didn’t just create a universe containing nothing but energy. According to standard physics theory, the Big Bang should have created equal amounts of matter and its nemesis, antimatter. Put these two together and they explode in mutual assured destruction, leaving nothing but energy. So why are we here? The LHCb experiment aims to uncover a previously undetected kink in the laws of physics that would explain how enough matter survived to build galaxies, stars, and planets. The idea is to make and study a flood of particles known as B mesons.

B mesons are important because, as they decay into other, more ordinary particles, they display a slight asymmetry: The antimatter versions tend to decay more readily into matter than the reverse. Existing experiments have already spotted a similar feature in the decay of another exotic particle called the kaon, and in certain kinds of B meson. The problem is that the kinds of imbalance seen to date can account for only a tenth of a billionth of the matter that we know is out there. With its super-high energies, LHCb will be able to make many more B mesons, including versions that have not yet been studied. When these particles decay, they could show enough of a matter-antimatter difference to start to explain one of the most basic questions of physics, if not philosophy: Why is there something rather than nothing?

Saturday, 16 May 2009

Is Quantum Mechanics Controlling Your Thoughts?

Science's weirdest realm may be responsible for photosynthesis, our sense of smell, and even consciousness itself.
by Mark Anderson

Graham Fleming sits down at an L-shaped lab bench, occupying a footprint about the size of two parking spaces. Alongside him, a couple of off-the-shelf lasers spit out pulses of light just millionths of a billionth of a second long. After snaking through a jagged path of mirrors and lenses, these minuscule flashes disappear into a smoky black box containing proteins from green sulfur bacteria, which ordinarily obtain their energy and nourishment from the sun. Inside the black box, optics manufactured to billionths-of-a-meter precision detect something extraordinary: Within the bacterial proteins, dancing electrons make seemingly impossible leaps and appear to inhabit multiple places at once.

Peering deep into these proteins, Fleming and his colleagues at the University of California at Berkeley and at Washington University in St. Louis have discovered the driving engine of a key step in photosynthesis, the process by which plants and some microorganisms convert water, carbon dioxide, and sunlight into oxygen and carbohydrates. More efficient by far in its ability to convert energy than any operation devised by man, this cascade helps drive almost all life on earth. Remarkably, photosynthesis appears to derive its ferocious efficiency not from the familiar physical laws that govern the visible world but from the seemingly exotic rules of quantum mechanics, the physics of the subatomic world. Somehow, in every green plant or photosynthetic bacterium, the two disparate realms of physics not only meet but mesh harmoniously. Welcome to the strange new world of quantum biology.

On the face of things, quantum mechanics and the biological sciences do not mix. Biology focuses on larger-scale processes, from molecular interactions between proteins and DNA up to the behavior of organisms as a whole; quantum mechanics describes the often-strange nature of electrons, protons, muons, and quarks—the smallest of the small. Many events in biology are considered straightforward, with one reaction begetting another in a linear, predictable way. By contrast, quantum mechanics is fuzzy because when the world is observed at the subatomic scale, it is apparent that particles are also waves: A dancing electron is both a tangible nugget and an oscillation of energy. (Larger objects also exist in particle and wave form, but the effect is not noticeable in the macroscopic world.)

Quantum mechanics holds that any given particle has a chance of being in a whole range of locations and, in a sense, occupies all those places at once. Physicists describe quantum reality in an equation they call the wave function, which reflects all the potential ways a system can evolve. Until a scientist measures the system, a particle exists in its multitude of locations. But at the time of measurement, the particle has to “choose” just a single spot. At that point, quantum physicists say, probability narrows to a single outcome and the wave function “collapses,” sending ripples of certainty through space-time. Imposing certainty on one particle could alter the characteristics of any others it has been connected with, even if those particles are now light-years away. (This process of influence at a distance is what physicists call entanglement.) As in a game of dominoes, alteration of one particle affects the next one, and so on.
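The measurement step described above can be sketched in a few lines. This is a toy illustration of the Born rule (outcome probability = |amplitude|²), not a model of any real experiment; the state and function names are invented for the example:

```python
import random

def measure(amplitudes):
    """Collapse a normalized list of complex amplitudes to one outcome index.

    Born rule: the probability of outcome i is |amplitudes[i]| ** 2.
    """
    probs = [abs(a) ** 2 for a in amplitudes]
    r = random.uniform(0.0, sum(probs))  # sum is ~1 for a normalized state
    acc = 0.0
    for i, p in enumerate(probs):
        acc += p
        if r <= acc:
            return i
    return len(probs) - 1  # guard against floating-point round-off

# A particle "in two places at once": equal amplitudes for positions 0 and 1.
superposition = [2 ** -0.5, 2 ** -0.5]
outcome = measure(superposition)  # collapses to 0 or 1, each with probability 1/2
```

Before `measure` is called, the state genuinely contains both possibilities; afterward, only one index survives, which is the "choosing" the article describes.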

The implications of all this are mind-bending. In the macro world, a ball never spontaneously shoots itself over a wall. In the quantum world, though, an electron in one biomolecule might hop to a second biomolecule, even though classical laws of physics hold that the electrons are too tightly bound to leave. The phenomenon of hopping across seemingly forbidden gaps is called quantum tunneling.
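A rough sense of how sharply tunneling depends on barrier size comes from the standard WKB estimate for a rectangular barrier, T ≈ exp(−2κL) with κ = √(2mV)/ħ. The numbers below (an electron facing a 1 eV, 1 nm barrier) are illustrative, not parameters of any biomolecule discussed here:

```python
import math

# Physical constants (SI units)
HBAR = 1.054_571_8e-34   # reduced Planck constant, J*s
M_E  = 9.109_383_7e-31   # electron mass, kg
EV   = 1.602_176_6e-19   # one electronvolt, J

def tunneling_probability(barrier_ev, width_m, mass=M_E):
    """WKB estimate T ~ exp(-2*kappa*L) for a rectangular barrier."""
    kappa = math.sqrt(2 * mass * barrier_ev * EV) / HBAR
    return math.exp(-2 * kappa * width_m)

t = tunneling_probability(1.0, 1e-9)  # small but decidedly nonzero
```

The exponential means doubling the barrier width squares the (already small) probability, which is why tunneling matters only across the molecular-scale gaps found inside proteins.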

From tunneling to entanglement, the special properties of the quantum realm allow events to unfold at speeds and efficiencies that would be unachievable with classical physics alone. Could quantum mechanisms be driving some of the most elegant and inexplicable processes of life? For years experts doubted it: Quantum phenomena typically reveal themselves only in lab settings, in vacuum chambers chilled to near absolute zero. Biological systems are warm and wet. Most researchers thought the thermal noise of life would drown out any quantum weirdness that might rear its head.

Yet new experiments keep finding quantum processes at play in biological systems, says Christopher Altman, a researcher at the Kavli Institute of Nanoscience in the Netherlands. With the advent of powerful new tools like femtosecond (10⁻¹⁵ second) lasers and nanoscale-precision positioning, life’s quantum dance is finally coming into view.

INTO THE LIGHT
One of the most significant quantum observations in the life sciences comes from Fleming and his collaborators. Their study of photosynthesis in green sulfur bacteria, published in 2007 in Nature [subscription required], tracked the detailed chemical steps that allow plants to harness sunlight and use it to convert simple raw materials into the oxygen we breathe and the carbohydrates we eat. Specifically, the team examined the protein scaffold connecting the bacteria’s external solar collectors, called the chlorosome, to reaction centers deep inside the cells. Unlike electric power lines, which lose as much as 20 percent of energy in transmission, these bacteria transmit energy at a staggering efficiency rate of 95 percent or better.

The secret, Fleming and his colleagues found, is quantum physics.

To unearth the bacteria’s inner workings, the researchers zapped the connective proteins with multiple ultrafast laser pulses. Over a span of femtoseconds, they followed the light energy through the scaffolding to the cellular reaction centers where energy conversion takes place.

Then came the revelation: Instead of haphazardly moving from one connective channel to the next, as might be seen in classical physics, energy traveled in several directions at the same time. The researchers theorized that only when the energy had reached the end of the series of connections could an efficient pathway retroactively be found. At that point, the quantum process collapsed, and the electrons’ energy followed that single, most effective path.

Electrons moving through a leaf or a green sulfur bacterial bloom are effectively performing a quantum “random walk”—a sort of primitive quantum computation—to seek out the optimum transmission route for the solar energy they carry. “We have shown that this quantum random-walk stuff really exists,” Fleming says. “Have we absolutely demonstrated that it improves the efficiency? Not yet. But that’s our conjecture. And a lot of people agree with it.”
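A minimal coined quantum walk of the kind Fleming invokes can be simulated directly. The point of the sketch is only the headline behavior: the quantum walk's probability distribution spreads linearly with the number of steps, while a classical random walk spreads only as the square root. (The Hadamard coin and symmetric starting state are standard textbook choices, not details of the photosynthesis experiment.)

```python
import math
from collections import defaultdict

def quantum_walk(steps):
    """Hadamard-coined quantum walk on the integer line.

    State: complex amplitudes indexed by (position, coin). Each step
    applies the Hadamard coin, then shifts coin-0 left and coin-1 right.
    """
    s = 1 / math.sqrt(2)
    amps = defaultdict(complex)
    amps[(0, 0)] = s        # symmetric initial coin state:
    amps[(0, 1)] = 1j * s   # (|0> + i|1>) / sqrt(2)
    for _ in range(steps):
        new = defaultdict(complex)
        for (pos, coin), a in amps.items():
            # Hadamard coin: |0> -> (|0>+|1>)/sqrt2, |1> -> (|0>-|1>)/sqrt2
            new[(pos - 1, 0)] += a * s                        # coin 0 steps left
            new[(pos + 1, 1)] += a * s * (-1 if coin else 1)  # coin 1 steps right
        amps = new
    # Born rule: position probability is |amplitude|^2 summed over the coin
    probs = defaultdict(float)
    for (pos, _), a in amps.items():
        probs[pos] += abs(a) ** 2
    return dict(probs)

def std_dev(probs):
    mean = sum(pos * p for pos, p in probs.items())
    return math.sqrt(sum(pos ** 2 * p for pos, p in probs.items()) - mean ** 2)

probs = quantum_walk(50)
# The quantum walk's spread grows roughly linearly in the step count,
# far beyond a classical random walk's sqrt(steps) ~ 7.1 after 50 steps.
```

This is the "exploring many paths at once" behavior: until the amplitudes are collapsed, the walker's probability runs ahead of anything a classical hopper could achieve.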

Elated by the finding, researchers are looking to mimic nature’s quantum ability to build solar energy collectors that work with near-photosynthetic efficiency. Alán Aspuru-Guzik, an assistant professor of chemistry and chemical biology at Harvard University, heads a team that is researching ways to incorporate the quantum lessons of photosynthesis into organic photovoltaic solar cells. This research is in only the earliest stages, but Aspuru-Guzik believes that Fleming’s work will be applicable in the race to manufacture cheap, efficient solar power cells out of organic molecules.

TUNNELING FOR SMELL
Quantum physics may explain the mysterious biological process of smell, too, says biophysicist Luca Turin, who first published his controversial hypothesis in 1996 while teaching at University College London. Then, as now, the prevailing notion was that the sensation of different smells is triggered when molecules called odorants fit into receptors in our nostrils like three-dimensional puzzle pieces snapping into place. The glitch here, for Turin, was that molecules with similar shapes do not necessarily smell anything like one another. Pinanethiol [C10H18S] has a strong grapefruit odor, for instance, while its near-twin pinanol [C10H18O] smells of pine needles. Smell must be triggered, he concluded, by some criteria other than an odorant’s shape alone.

What is really happening, Turin posited, is that the approximately 350 types of human smell receptors perform an act of quantum tunneling when a new odorant enters the nostril and reaches the olfactory nerve. After the odorant attaches to one of the nerve’s receptors, electrons from that receptor tunnel through the odorant, jiggling it back and forth. In this view, the odorant’s unique pattern of vibration is what makes a rose smell rosy and a wet dog smell wet-doggy.

In the quantum world, an electron from one biomolecule might hop to another, though classical laws of physics forbid it.

In 2007 Turin (who is now chief technical officer of the odorant-designing company Flexitral in Chantilly, Virginia) and his hypothesis received support from a paper by four physicists at University College London. That work, published in the journal Physical Review Letters [subscription required], showed how the smell-tunneling process may operate. As an odorant approaches, electrons released from one side of a receptor quantum-mechanically tunnel through the odorant to the opposite side of the receptor. Exposed to this electric current, the heavier pinanethiol would vibrate differently from the lighter but similarly shaped pinanol.

“I call it the ‘swipe-card model,’” says coauthor A. Marshall Stoneham, an emeritus professor of physics. “The card’s got to be a good enough shape to swipe through one of the receptors.” But it is the frequency of vibration, not the shape, that determines the scent of a molecule.

THE GREEN TEA PARTY
Even green tea may tie into subtle subatomic processes. In 2007 four biochemists from the Autonomous University of Barcelona announced that the secret to green tea’s effectiveness as an anti-oxidant—a substance that neutralizes the harmful free radicals that can damage cells—may also be quantum mechanical. Publishing their findings in the Journal of the American Chemical Society [subscription required], the group reported that antioxidants called catechins act like fishing trollers in the human body. (Catechins are among the chief organic compounds found in tea, wine, and some fruits and vegetables.)

Free radical molecules, by-products of the body’s breakdown of food or environmental toxins, have a spare electron. That extra electron makes free radicals reactive, and hence dangerous as they travel through the bloodstream. But an electron from the catechin can make use of quantum mechanics to tunnel across the gap to the free radical. Suddenly the catechin has chemically bound up the free radical, preventing it from interacting with and damaging cells in the body.

Quantum tunneling has also been observed in enzymes, the proteins that facilitate molecular reactions within cells. Two studies, one published in Science in 2006 and the other in Biophysical Journal in 2007, have found that some enzymes appear to lack the energy to complete the reactions they ultimately propel; the enzyme’s success, it now seems, could be explained only through quantum means.

QUANTUM TO THE CORE
Stuart Hameroff, an anesthesiologist and director of the Center for Consciousness Studies at the University of Arizona, argues that the highest function of life—consciousness—is likely a quantum phenomenon too. This is illustrated, he says, through anesthetics. The brain of a patient under anesthesia continues to operate actively, but without a conscious mind at work. What enables anesthetics such as xenon or isoflurane gas to switch off the conscious mind?

Hameroff speculates that anesthetics “interrupt a delicate quantum process” within the neurons of the brain. Each neuron contains hundreds of long, cylindrical protein structures, called microtubules, that serve as scaffolding. Anesthetics, Hameroff says, dissolve inside tiny oily regions of the microtubules, affecting how some electrons inside these regions behave.

He speculates that the action unfolds like this: When certain key electrons are in one “place,” call it to the “left,” part of the microtubule is squashed; when the electrons fall to the “right,” the section is elongated. But the laws of quantum mechanics allow for electrons to be both “left” and “right” at the same time, and thus for the microtubules to be both elongated and squashed at once. Each section of the constantly shifting system has an impact on other sections, potentially via quantum entanglement, leading to a dynamic quantum-mechanical dance.

It is in this faster-than-light subatomic communication, Hameroff says, that consciousness is born. Anesthetics get in the way of the dancing electrons and stop the gyration at its quantum-mechanical core; that is how they are able to switch consciousness off.

It is still a long way from Hameroff’s hypothetical (and experimentally unproven) quantum neurons to a sentient, conscious human brain. But many human experiences, Hameroff says, from dreams to subconscious emotions to fuzzy memory, seem closer to the Alice in Wonderland rules governing the quantum world than to the cut-and-dried reality that classical physics suggests. Discovering a quantum portal within every neuron in your head might be the ultimate trip through the looking glass.

Do Naked Singularities Break the Rules of Physics?

The black hole has a troublesome sibling, the naked singularity. Physicists have long thought--hoped--it could never exist. But could it?
From the February 2009 Scientific American Magazine
By Pankaj S. Joshi


Modern science has introduced the world to plenty of strange ideas, but surely one of the strangest is the fate of a massive star that has reached the end of its life. Having exhausted the fuel that sustained it for millions of years, the star is no longer able to hold itself up under its own weight, and it starts collapsing catastrophically. Modest stars like the sun also collapse, but they stabilize again at a smaller size. But if a star is massive enough, its gravity overwhelms all the forces that might halt the collapse. From millions of kilometers across, the star crumples to a pinprick smaller than the dot on an "i."

Most physicists and astronomers think the result is a black hole, a body with such intense gravity that nothing can escape from its immediate vicinity. A black hole has two parts. At its core is a singularity, the infinitesimal point into which all the matter of the star gets crushed. Surrounding the singularity is the region of space from which escape is impossible, the perimeter of which is called the event horizon. Once something enters the event horizon, it loses all hope of exiting. Whatever light the falling body gives off is trapped, too, so an outside observer never sees it again. It ultimately crashes into the singularity.
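The size of that point of no return follows from a standard general-relativity formula, the Schwarzschild radius r = 2GM/c². The sketch below uses a 10-solar-mass star as an illustrative input (real collapsed stars also rotate, which the formula ignores):

```python
# Schwarzschild radius r = 2GM/c^2: the event-horizon size for a
# non-rotating mass. The 10-solar-mass input is illustrative.
G = 6.674_30e-11         # gravitational constant, m^3 kg^-1 s^-2
C = 299_792_458.0        # speed of light, m/s
M_SUN = 1.989e30         # solar mass, kg

def schwarzschild_radius(mass_kg):
    return 2 * G * mass_kg / C**2

r = schwarzschild_radius(10 * M_SUN)
# r comes out around 30 km: a star millions of kilometers across
# ends up with a horizon smaller than a city.
```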

But is this picture really true? The known laws of physics are clear that a singularity forms, but they are hazy about the event horizon. Most physicists operate under the assumption that a horizon must indeed form, if only because the horizon is very appealing as a scientific fig leaf. Physicists have yet to figure out what exactly happens at a singularity: matter is crushed, but what becomes of it then? The event horizon, by hiding the singularity, isolates this gap in our knowledge. All kinds of processes unknown to science may occur at the singularity, yet they have no effect on the outside world. Astronomers plotting the orbits of planets and stars can safely ignore the uncertainties introduced by singularities and apply the standard laws of physics with confidence. Whatever happens in a black hole stays in a black hole.

Yet a growing body of research calls this working assumption into question. Researchers have found a wide variety of stellar collapse scenarios in which an event horizon does not in fact form, so that the singularity remains exposed to our view. Physicists call it a naked singularity. Matter and radiation can both fall in and come out. Whereas visiting the singularity inside a black hole would be a one-way trip, you could in principle come as close as you like to a naked singularity and return to tell the tale.

If naked singularities exist, the implications would be enormous and would touch on nearly every aspect of astrophysics and fundamental physics. The lack of horizons could mean that mysterious processes occurring near the singularities would impinge on the outside world. Naked singularities might account for unexplained high-energy phenomena that astronomers have seen, and they might offer a laboratory to explore the fabric of spacetime on its finest scales.

Event horizons were supposed to have been the easy part about black holes. Singularities are clearly mysterious. They are places where the strength of gravity becomes infinite and the known laws of physics break down. According to physicists' current understanding of gravity, encapsulated in Einstein's general theory of relativity, singularities inevitably arise during the collapse of a giant star. General relativity does not account for the quantum effects that become important for microscopic objects, and those effects presumably intervene to prevent the strength of gravity from becoming truly infinite. But physicists are still struggling to develop the quantum theory of gravity they need to explain singularities.

By comparison, what happens to the region of spacetime around the singularity seems as though it should be rather straightforward. Stellar event horizons are many kilometers in size, far larger than the typical scale of quantum effects. Assuming that no new forces of nature intervene, horizons should be governed purely by general relativity, a theory that is based on well-understood principles and has passed 90 years of observational tests.

That said, applying the theory to stellar collapse is still a formidable task. Einstein's equations of gravity are notoriously complex, and solving them requires physicists to make simplifying assumptions. American physicists J. Robert Oppenheimer and Hartland S. Snyder and, independently, Indian physicist B. Datt made an initial attempt in the late 1930s. To simplify the equations, they considered only perfectly spherical stars, assumed the stars consisted of gas of a homogeneous (uniform) density and neglected gas pressure. They found that as this idealized star collapses, the gravity at its surface intensifies and eventually becomes strong enough to trap all light and matter, thereby forming an event horizon. The star becomes invisible to outside observers and soon thereafter collapses all the way down to a singularity.

Real stars, of course, are more complicated. Their density is inhomogeneous, the gas in them exerts pressure, and they can assume other shapes. Does every sufficiently massive collapsing star turn into a black hole? In 1969 University of Oxford physicist Roger Penrose suggested that the answer is yes. He conjectured that the formation of a singularity during stellar collapse necessarily entails the formation of an event horizon. Nature thus forbids us from ever seeing a singularity, because a horizon always cloaks it. Penrose's conjecture is termed the cosmic censorship hypothesis. It is only a conjecture, but it underpins the modern study of black holes. Physicists hoped we would be able to prove it with the same mathematical rigor we used to show the inevitability of singularities.

That has not happened. Instead of coming up with a direct proof of censorship that applies under all conditions, we have had to embark on the longer route of analyzing case studies of gravitational collapse one by one, gradually embellishing our theoretical models with the features that the initial efforts lacked. In 1973 German physicist Hans Jürgen Seifert and his colleagues considered inhomogeneity. Intriguingly, they found that layers of infalling matter could intersect to create momentary singularities that were not covered by horizons. But singularities come in various types, and these were fairly benign. Although the density at one location became infinite, the strength of gravity did not, so the singularity did not crush matter and infalling objects to an infinitesimal pinprick. Thus, general relativity never broke down, and matter continued to move through this location rather than meeting its end.

In 1979 Douglas M. Eardley of the University of California, Santa Barbara, and Larry Smarr of the University of Illinois at Urbana-Champaign went a step further and performed a numerical simulation of a star with a realistic density profile: highest at its center and slowly decreasing toward the surface. An exact paper-and-pencil treatment of the same situation, undertaken by Demetrios Christodoulou of the Swiss Federal Institute of Technology in Zurich, followed in 1984. Both studies found that the star shrank to zero size and that a naked singularity resulted. But the model still neglected pressure, and Richard P.A.C. Newman, then at the University of York in England, showed that the singularity was again gravitationally weak.

Inspired by these findings, many researchers, including me, tried to formulate a rigorous theorem that naked singularities would always be weak. We were unsuccessful. The reason soon became clear: naked singularities are not always weak. We found scenarios of inhomogeneous collapse that led to singularities where gravity was strong, that is, genuine singularities that could crush matter into oblivion yet remained visible to external observers. A general analysis of stellar collapse in the absence of gas pressure, developed in 1993 by Indresh Dwivedi, then at Agra University, and me, clarified and settled these points.

In the early 1990s physicists considered the effects of gas pressure. Amos Ori of the Technion-Israel Institute of Technology and Tsvi Piran of the Hebrew University of Jerusalem conducted numerical simulations, and my group solved the relevant equations exactly. Stars with a fully realistic relation between density and pressure could collapse to naked singularities. At about the same time, teams led by Giulio Magli of the Polytechnic University of Milan and by Kenichi Nakao of Osaka City University considered a form of pressure generated by rotation of particles within a collapsing star. They, too, showed that in a wide variety of situations, collapse ends in a naked singularity after all.

These studies analyzed perfectly spherical stars, which is not as severe a limitation as it might appear, because most stars in nature are very close to this shape. Moreover, spherical stars have, if anything, more favorable conditions for horizon formation than stars of other shapes do, so if cosmic censorship fails even for them, its prospects look questionable. That said, physicists have been exploring nonspherical collapse. In 1991 Stuart L. Shapiro of the University of Illinois and Saul A. Teukolsky of Cornell University presented numerical simulations in which oblong stars could collapse to a naked singularity. A few years later Andrzej Królak of the Polish Academy of Sciences and I studied nonspherical collapse and also found naked singularities. To be sure, both these studies neglected gas pressure.

Skeptics have wondered whether these situations are contrived. Would a slight change to the initial configuration of the star abruptly cause an event horizon to cover the singularity? If so, then the naked singularity might be an artifact of the approximations used in the calculations and would not truly arise in nature. Some scenarios involving unusual forms of matter are indeed very sensitive. But our results so far also show that most naked singularities are stable to small variations of the initial setup. Thus, these situations appear to be what physicists call generic, that is, they are not contrived.

These counterexamples to Penrose's conjecture suggest that cosmic censorship is not a general rule. Physicists cannot say, "Collapse of any massive star makes a black hole only," or "Any physically realistic collapse ends in a black hole." Some scenarios lead to a black hole and others to a naked singularity. In some models, the singularity is visible only temporarily, and an event horizon eventually forms to cloak it. In others, the singularity remains visible forever. Typically the naked singularity develops in the geometric center of collapse, but it does not always do so, and even when it does, it can also spread to other regions. Nakedness also comes in degrees: an event horizon might hide the singularity from the prying eyes of faraway observers, whereas observers who fell through the event horizon could see the singularity prior to hitting it. The variety of outcomes is bewildering.

My colleagues and I have isolated various features of these scenarios that cause an event horizon to arise or not. In particular, we have examined the role of inhomogeneities and gas pressure. According to Einstein's theory, gravity is a complex phenomenon involving not only a force of attraction but also effects such as shearing, in which different layers of material are shifted laterally in opposite directions. If the density of a collapsing star is very high, so high that by all rights it should trap light, but also inhomogeneous, those other effects may create escape routes. Shearing of material close to a singularity, for example, can set off powerful shock waves that eject matter and light, in essence a gravitational typhoon that disrupts the formation of an event horizon.

To be specific, consider a homogeneous star, neglecting gas pressure. (Pressure alters the details but not the broad outlines of what happens.) As the star collapses, gravity increases in strength and bends the paths of moving objects ever more severely. Light rays, too, become bent, and there comes a time when the bending is so severe that light can no longer propagate away from the star. The region where light becomes trapped starts off small, grows and eventually reaches a stable size proportional to the star's mass. Meanwhile, because the star's density is uniform in space and varies only in time, the entire star is crushed to a point simultaneously. The trapping of light occurs well before this moment, so the singularity remains hidden.

Now consider the same situation except that the density decreases with distance from the center. In effect, the star has an onionlike structure of concentric shells of matter. The strength of gravity acting on each shell depends on the average density of matter interior to that shell. Because the denser inner shells feel a stronger pull of gravity, they collapse faster than the outer ones. The entire star does not collapse to a singularity simultaneously. The innermost shells collapse first, and then the outer shells pile on, one by one.

The resulting delay can postpone the formation of an event horizon. If the horizon can form anywhere, it will form in the dense inner shells. But if density decreases with distance too rapidly, these shells may not constitute enough mass to trap light. The singularity, when it forms, will be naked. Therefore, there is a threshold: if the degree of inhomogeneity is very small, below a critical limit, a black hole will form; with sufficient inhomogeneity, a naked singularity arises.
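The shell-by-shell picture lends itself to a quick numerical illustration. The sketch below uses the Newtonian free-fall time, t = sqrt(3π / 32Gρ̄), as a stand-in for the relativistic collapse time of a pressureless shell; the density profile, its parameters and the length scale are all invented for illustration, not taken from the studies described above.

```python
import math

G = 6.674e-11  # gravitational constant, SI units

def mean_interior_density(rho0, a, r):
    """Mean density inside radius r for the toy profile rho(r) = rho0*(1 - a*r^2)."""
    # M(r) = 4*pi * integral_0^r rho(r') r'^2 dr' = 4*pi*rho0*(r^3/3 - a*r^5/5)
    mass = 4 * math.pi * rho0 * (r**3 / 3 - a * r**5 / 5)
    volume = 4 / 3 * math.pi * r**3
    return mass / volume

def free_fall_time(rho_bar):
    """Newtonian free-fall time of a pressureless shell enclosing mean density rho_bar."""
    return math.sqrt(3 * math.pi / (32 * G * rho_bar))

rho0, a = 1e6, 0.5  # central density (kg/m^3) and inhomogeneity parameter; toy values
for r in (0.2, 0.5, 1.0):  # shell radii in meters (toy scale)
    t = free_fall_time(mean_interior_density(rho0, a, r))
    print(f"shell at r = {r} m collapses after {t:.1f} s")
# With a = 0 (a homogeneous star) every shell collapses at the same moment.
```

Denser inner shells give shorter free-fall times, so the collapse proceeds from the inside out, which is exactly the staggering described above.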

In other scenarios, the salient issue is the rapidity of collapse. This effect comes out very clearly in models where stellar gas has converted fully to radiation and, in effect, the star becomes a giant fireball, a scenario first considered by the Indian physicist P. C. Vaidya in the 1940s in the context of modeling a radiating star. Again there is a threshold: slowly collapsing fireballs become black holes, but if a fireball collapses rapidly enough, light does not become trapped and the singularity is naked.

One reason it has taken so long for physicists to accept the possibility of naked singularities is that they raise a number of conceptual puzzles. A commonly cited concern is that such singularities would make nature inherently unpredictable. Because general relativity breaks down at singularities, it cannot predict what those singularities will do. John Earman of the University of Pittsburgh memorably suggested that green slime and lost socks could emerge from them. They are places of magic, where science fails.

As long as singularities remain safely ensconced within event horizons, this randomness remains contained and general relativity is a fully predictive theory, at least outside the horizon. But if singularities can be naked, their unpredictability would infect the rest of the universe. For example, when physicists applied general relativity to Earth's orbit around the sun, they would in effect have to make allowance for the possibility that a singularity somewhere in the universe could emit a random gravitational pulse and send our planet flying off into deep space.

Yet this worry is misplaced. Unpredictability is actually common in general relativity and not always directly related to censorship violation. The theory permits time travel, which could produce causal loops with unforeseeable outcomes, and even ordinary black holes can become unpredictable. For example, if we drop an electric charge into an uncharged black hole, the shape of spacetime around the hole radically changes and is no longer predictable. A similar situation holds when the black hole is rotating. Specifically, what happens is that spacetime no longer neatly separates into space and time, so physicists cannot consider how the black hole evolves from some initial time into the future. Only the purest of pure black holes, with no charge or rotation at all, is fully predictable.

The loss of predictability and other problems with black holes actually stem from the occurrence of singularities; it does not matter whether they are hidden or not. The solution to these problems probably lies in a quantum theory of gravity, which will go beyond general relativity and offer a full explication of singularities. Within that theory, every singularity would prove to have a high but finite density. A naked singularity would be a "quantum star," a hyperdense body governed by the rules of quantum gravity. What seems random would have a logical explanation.

Another possibility is that singularities may really have an infinite density after all, that they are not things to be explained away by quantum gravity but to be accepted as they are. The breakdown of general relativity at such a location may not be a failure of the theory per se but a sign that space and time have an edge. The singularity marks the place where the physical world ends. We should think of it as an event rather than an object, a moment when collapsing matter reaches the edge and ceases to be, like the big bang in reverse.

In that case, questions such as what will come out of a naked singularity are not really meaningful; there is nothing to come out of, because the singularity is just a moment in time. What we see from a distance is not the singularity itself but the processes that occur in the extreme conditions of matter near this event, such as shock waves caused by inhomogeneities in this ultradense medium or quantum-gravitational effects in its vicinity.

In addition to unpredictability, a second issue troubles many physicists. Having provisionally assumed that the censorship conjecture holds, they have spent the past several decades formulating various laws that black holes should obey, and these laws have the ring of deep truths. But the laws are not free of major paradoxes. For example, they hold that a black hole swallows and destroys information, which appears to contradict the basic principles of quantum theory [see "Black Holes and the Information Paradox," by Leonard Susskind; Scientific American, April 1997]. This paradox and other predicaments stem from the presence of an event horizon. If the horizon goes away, these problems might go away, too. For instance, if the star could radiate away most of its mass in the late stages of collapse, it would destroy no information and leave behind no singularity. In that case, it would not take a quantum theory of gravity to explain singularities; general relativity might do the trick itself.

Far from considering naked singularities a problem, physicists can see them as an asset. If the singularities that formed in the gravitational collapse of a massive star are visible to external observers, they could provide a laboratory to study quantum-gravitational effects. Quantum gravity theories in the making, such as string theory and loop quantum gravity, are badly in need of some kind of observational input, without which it is nearly impossible to constrain the plethora of possibilities. Physicists commonly seek that input in the early universe, when conditions were so extreme that quantum-gravitational effects dominated. But the big bang was a unique event. If singularities could be naked, they would allow astronomers to observe the equivalent of a big bang every time a massive star in the universe ends its life.

To explore how naked singularities might provide a glimpse into otherwise unobservable phenomena, we recently simulated how a star collapses to a naked singularity, taking into account the effects predicted by loop quantum gravity. According to this theory, space consists of tiny atoms, which become conspicuous when matter becomes sufficiently dense; the result is an extremely powerful repulsive force that prevents the density from ever becoming infinite [see "Follow the Bouncing Universe," by Martin Bojowald; Scientific American, October 2008]. In our model, such a repulsive force dispersed the star and dissolved the singularity. Nearly a quarter of the mass of the star was ejected within the final fraction of a microsecond. Just before it did so, a faraway observer would have seen a sudden dip in the intensity of radiation from the collapsing star, a direct result of quantum-gravitational effects.

The explosion would have unleashed high-energy gamma rays, cosmic rays and other particles such as neutrinos. Upcoming experiments such as the Extreme Universe Space Observatory, a module for the International Space Station expected to be operational in 2013, may have the needed sensitivity to see this emission. Because the details of the outpouring depend on the specifics of the quantum gravity theory, observations would provide a way to discriminate among theories.

Either proving or disproving cosmic censorship would create a mini explosion of its own within physics, because naked singularities touch on so many deep aspects of current theories. What comes out unambiguously from the theoretical work so far is that censorship does not hold in an unqualified form, as it is sometimes taken to be. Singularities are clothed only if the conditions are suitable. The question remains whether these conditions could ever arise in nature. If they can, then physicists will surely come to love what they once feared.

20 Things You Didn't Know About... Time

The beginning, the end, and the funny habits of our favorite ticking force.

by LeeAundra Temescu

1 “Time is an illusion. Lunchtime doubly so,” joked Douglas Adams in The Hitchhiker’s Guide to the Galaxy. Scientists aren’t laughing, though. Some speculative new physics theories suggest that time emerges from a more fundamental—and timeless—reality.

2 Try explaining that when you get to work late. The average U.S. city commuter loses 38 hours a year to traffic delays.

3 Wonder why you have to set your clock ahead in March? Daylight Saving Time began as a joke by Benjamin Franklin, who proposed waking people earlier on bright summer mornings so they might work more during the day and thus save candles. It was introduced in the U.K. in 1916 and then spread around the world.

4 Green days. The Department of Energy estimates that electricity demand drops by 0.5 percent during Daylight Saving Time, saving the equivalent of nearly 3 million barrels of oil.

5 By observing how quickly bank tellers made change, pedestrians walked, and postal clerks spoke, psychologists determined that the three fastest-paced U.S. cities are Boston, Buffalo, and New York.

6 The three slowest? Shreveport, Sacramento, and L.A.

7 One second used to be defined as 1/86,400 of the length of a day. However, Earth’s rotation isn’t perfectly reliable. Tidal friction from the sun and moon slows our planet, increasing the length of a day by 3 milliseconds per century.

8 This means that in the time of the dinosaurs, the day was just 23 hours long.
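Items 7 and 8 hang together arithmetically: at 3 milliseconds of lengthening per century, 100 million years (a late-dinosaur timescale, chosen here purely for illustration) adds up to about 50 minutes. A quick check:

```python
# Rough check of items 7-8: at ~3 ms of day lengthening per century,
# how much shorter was the day ~100 million years ago?
ms_per_century = 3
centuries = 100_000_000 / 100                     # 100 million years = 1 million centuries
shortening_s = ms_per_century * centuries / 1000  # total change, in seconds
print(shortening_s / 60)                          # minutes shorter back then
print(24 - shortening_s / 3600)                   # day length then, in hours
```

The result is a day roughly 50 minutes shorter, i.e. a bit over 23 hours, matching item 8.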

9 Weather also changes the day. During El Niño events, strong winds can slow Earth’s rotation by a fraction of a millisecond every 24 hours.

10 Modern technology can do better. In 1972 a network of atomic clocks in more than 50 countries was made the final authority on time, so accurate that it takes 31.7 million years to lose about one second.

11 To keep this time in sync with Earth’s slowing rotation, a “leap second” must be added every few years, most recently this past New Year’s Eve.

12 The world’s most accurate clock, at the National Institute of Standards and Technology in Colorado, measures vibrations of a single atom of mercury. In a billion years it will not lose one second.

13 Until the 1800s, every village lived in its own little time zone, with clocks synchronized to the local solar noon.

14 This caused havoc with the advent of trains and timetables. For a while watches were made that could tell both local time and “railway time.”

15 On November 18, 1883, American railway companies forced the national adoption of standardized time zones.

16 Thinking about how railway time required clocks in different places to be synchronized may have inspired Einstein to develop his theory of relativity, which unifies space and time.

17 Einstein showed that gravity makes time run more slowly. Thus airplane passengers, flying where Earth’s pull is weaker, age a few extra nanoseconds each flight.
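A back-of-envelope version of item 17 uses the weak-field formula for gravitational time dilation, Δt/t ≈ gh/c². The cruising altitude and flight duration below are illustrative assumptions, and the calculation ignores the partially offsetting slowdown caused by the plane's speed; it gives tens of nanoseconds for a long flight, the same order of magnitude as the article's claim.

```python
g = 9.81            # surface gravity, m/s^2
h = 10_000          # assumed cruising altitude, m
c = 2.998e8         # speed of light, m/s
flight = 6 * 3600   # an assumed six-hour flight, in seconds

fractional_speedup = g * h / c**2   # weak-field gravitational time dilation, gh/c^2
extra_ns = fractional_speedup * flight * 1e9
print(f"extra time aged aloft: about {extra_ns:.0f} ns")
```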

18 According to quantum theory, the shortest moment of time that can exist is known as Planck time, or 0.0000000000000000000000000000000000000000001 second.

19 Time has not been around forever. Most scientists believe it was created along with the rest of the universe in the Big Bang, 13.7 billion years ago.

20 There may be an end of time. Three Spanish scientists posit that the observed acceleration of the expanding cosmos is an illusion caused by the slowing of time. According to their math, time may eventually stop, at which point everything will come to a standstill.

Friday, 15 May 2009

Large Hadron Collider: The Discovery Machine

A global collaboration of scientists is preparing to start up the greatest particle physics experiment in history

By Graham P. Collins

You could think of it as the biggest, most powerful microscope in the history of science. The Large Hadron Collider (LHC), now being completed underneath a circle of countryside and villages a short drive from Geneva, will peer into the physics of the shortest distances (down to a nano-nanometer) and the highest energies ever probed. For a decade or more, particle physicists have been eagerly awaiting a chance to explore that domain, sometimes called the terascale because of the energy range involved: a trillion electron volts, or 1 TeV. Significant new physics is expected to appear at these energies, including the elusive Higgs particle (believed to be responsible for imbuing other particles with mass) and the particle that constitutes the dark matter making up most of the material in the universe.

The mammoth machine, after a nine-year construction period, is scheduled (touch wood) to begin producing its beams of particles later this year. The commissioning process is planned to proceed from one beam to two beams to colliding beams; from lower energies to the terascale; from weaker test intensities to stronger ones suitable for producing data at useful rates but more difficult to control. Each step along the way will produce challenges to be overcome by the more than 5,000 scientists, engineers and students collaborating on the gargantuan effort. When I visited the project last fall to get a firsthand look at the preparations to probe the high-energy frontier, I found that everyone I spoke to expressed quiet confidence about their ultimate success, despite the repeatedly delayed schedule. The particle physics community is eagerly awaiting the first results from the LHC. Frank Wilczek of the Massachusetts Institute of Technology echoes a common sentiment when he speaks of the prospects for the LHC to produce “a golden age of physics.”

A Machine of Superlatives
To break into the new territory that is the terascale, the LHC’s basic parameters outdo those of previous colliders in almost every respect. It starts by producing proton beams of far higher energies than ever before. Its nearly 7,000 magnets, chilled by liquid helium to less than two kelvins to make them superconducting, will steer and focus two beams of protons traveling within a millionth of a percent of the speed of light. Each proton will have about 7 TeV of energy—7,000 times as much energy as a proton at rest has embodied in its mass, courtesy of Einstein’s E = mc2. That is about seven times the energy of the reigning record holder, the Tevatron collider at Fermi National Accelerator Laboratory in Batavia, Ill. Equally important, the machine is designed to produce beams with 40 times the intensity, or luminosity, of the Tevatron’s beams. When it is fully loaded and at maximum energy, all the circulating particles will carry energy roughly equal to the kinetic energy of about 900 cars traveling at 100 kilometers per hour, or enough to heat the water for nearly 2,000 liters of coffee.
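The stored-energy comparison can be checked from the design figures. The bunch count and protons-per-bunch used below are nominal LHC design values quoted as assumptions, and the car-equivalent figure depends strongly on the assumed car mass, so expect only order-of-magnitude agreement with the article's ~900-car figure.

```python
eV = 1.602e-19                   # joules per electron volt
protons_per_bunch = 1.15e11      # nominal design value (assumption)
bunches = 2808                   # nominal bunches per beam (assumption)
energy_per_proton = 7e12 * eV    # 7 TeV, in joules

beam_energy = protons_per_bunch * bunches * energy_per_proton  # one beam
print(f"stored energy per beam: {beam_energy / 1e6:.0f} MJ")

# compare: kinetic energy of a 1,500 kg car at 100 km/h
car = 0.5 * 1500 * (100 / 3.6)**2
print(f"car-equivalents for both beams: {2 * beam_energy / car:.0f}")
```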

The protons will travel in nearly 3,000 bunches, spaced all around the 27-kilometer circumference of the collider. Each bunch of up to 100 billion protons will be the size of a needle, just a few centimeters long and squeezed down to 16 microns in diameter (about the same as the thinnest of human hairs) at the collision points. At four locations around the ring, these needles will pass through one another, producing more than 600 million particle collisions every second. The collisions, or events, as physicists call them, actually will occur between particles that make up the protons—quarks and gluons. The most cataclysmic of the smashups will release about a seventh of the energy available in the parent protons, or about 2 TeV. (For the same reason, the Tevatron falls short of exploring terascale physics by about a factor of five, despite the 1-TeV energy of its protons and antiprotons.)

Four giant detectors—the largest would roughly half-fill the Notre Dame cathedral in Paris, and the heaviest contains more iron than the Eiffel Tower—will track and measure the thousands of particles spewed out by each collision occurring at their centers. Despite the detectors’ vast size, some elements of them must be positioned with a precision of 50 microns.

The nearly 100 million channels of data streaming from each of the two largest detectors would fill 100,000 CDs every second, enough to produce a stack to the moon in six months. So instead of attempting to record it all, the experiments will have what are called trigger and data-acquisition systems, which act like vast spam filters, immediately discarding almost all the information and sending the data from only the most promising-looking 100 events each second to the LHC’s central computing system at CERN, the European laboratory for particle physics and the collider’s home, for archiving and later analysis.

A “farm” of a few thousand computers at CERN will turn the filtered raw data into more compact data sets organized for physicists to comb through. Their analyses will take place on a so-called grid network comprising tens of thousands of PCs at institutes around the world, all connected to a hub of a dozen major centers on three continents that are in turn linked to CERN by dedicated optical cables.

Journey of a Thousand Steps
In the coming months, all eyes will be on the accelerator. The final connections between adjacent magnets in the ring were made in early November, and as we go to press in mid-December one of the eight sectors has been cooled almost to the cryogenic temperature required for operation, and the cooling of a second has begun. One sector was cooled, powered up and then returned to room temperature earlier in 2007. After the operation of the sectors has been tested, first individually and then together as an integrated system, a beam of protons will be injected into one of the two beam pipes that carry them around the machine’s 27 kilometers.

The series of smaller accelerators that supply the beam to the main LHC ring has already been checked out, bringing protons with an energy of 0.45 TeV “to the doorstep” of where they will be injected into the LHC. The first injection of the beam will be a critical step, and the LHC scientists will start with a low-intensity beam to reduce the risk of damaging LHC hardware. Only when they have carefully assessed how that “pilot” beam responds inside the LHC and have made fine corrections to the steering magnetic fields will they proceed to higher intensities. For the first running at the design energy of 7 TeV, only a single bunch of protons will circulate in each direction instead of the nearly 3,000 that constitute the ultimate goal.

As the full commissioning of the accelerator proceeds in this measured step-by-step fashion, problems are sure to arise. The big unknown is how long the engineers and scientists will take to overcome each challenge. If a sector has to be brought back to room temperature for repairs, it will add months.

The four experiments—ATLAS, ALICE, CMS and LHCb—also have a lengthy process of completion ahead of them, and they must be closed up before the beam commissioning begins. Some extremely fragile units are still being installed, such as the so-called vertex locator detector that was positioned in LHCb in mid-November. During my visit, as one who specialized in theoretical rather than experimental physics many years ago in graduate school, I was struck by the thick rivers of thousands of cables required to carry all the channels of data from the detectors—every cable individually labeled and needing to be painstakingly matched up to the correct socket and tested by present-day students.

Although colliding beams are still months in the future, some of the students and postdocs already have their hands on real data, courtesy of cosmic rays sleeting down through the Franco-Swiss rock and passing through their detectors sporadically. Seeing how the detectors respond to these interlopers provides an important reality check that everything is working together correctly—from the voltage supplies to the detector elements themselves to the electronics of the readouts to the data-acquisition software that integrates the millions of individual signals into a coherent description of an “event.”

All Together Now
When everything is working together, including the beams colliding at the center of each detector, the task faced by the detectors and the data-processing systems will be Herculean. At the design luminosity, as many as 20 events will occur with each crossing of the needlelike bunches of protons. A mere 25 nanoseconds pass between one crossing and the next (some have larger gaps). Product particles sprayed out from the collisions of one crossing will still be moving through the outer layers of a detector when the next crossing is already taking place. Individual elements in each of the detector layers respond as a particle of the right kind passes through it. The millions of channels of data streaming away from the detector produce about a megabyte of data from each event: a petabyte, or a billion megabytes, of it every two seconds.
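Those rates are roughly self-consistent, as a few lines of arithmetic show. The naive event rate comes out somewhat above the quoted 600 million per second because this sketch ignores the gaps between bunch trains, which lower the effective crossing rate.

```python
crossing_interval_ns = 25
crossings_per_s = 1e9 / crossing_interval_ns   # ~40 million crossings/s, ignoring gaps
events_per_crossing = 20                       # design-luminosity pileup, from the text
event_size_MB = 1                              # ~1 MB per event, from the text

events_per_s = crossings_per_s * events_per_crossing
data_rate_MB = events_per_s * event_size_MB
print(f"events per second: {events_per_s:.1e}")
print(f"petabytes per 2 s: {data_rate_MB * 2 / 1e9:.1f}")
```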

The trigger system that will reduce this flood of data to manageable proportions has multiple levels. The first level will receive and analyze data from only a subset of all the detector’s components, from which it can pick out promising events based on isolated factors such as whether an energetic muon was spotted flying out at a large angle from the beam axis. This so-called level-one triggering will be conducted by hundreds of dedicated computer boards—the logic embodied in the hardware. They will select 100,000 bunches of data per second for more careful analysis by the next stage, the higher-level trigger.

The higher-level trigger, in contrast, will receive data from all of the detector’s millions of channels. Its software will run on a farm of computers, and with an average of 10 microseconds elapsing between each bunch approved by the level-one trigger, it will have enough time to “reconstruct” each event. In other words, it will project tracks back to common points of origin and thereby form a coherent set of data—energies, momenta, trajectories, and so on—for the particles produced by each event.
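The two-stage filtering can be caricatured in a few lines of code. Everything here, the event fields, the thresholds and the selection logic, is invented for illustration; the real level-one trigger runs in dedicated hardware on coarse detector data, and the real higher-level trigger reconstructs far richer event records.

```python
import random

random.seed(1)

def level_one(event):
    """Fast hardware-style cut on one coarse quantity: keep events with an
    energetic, wide-angle muon (thresholds are made up for illustration)."""
    return event["muon_energy_gev"] > 20 and event["muon_angle_deg"] > 30

def high_level(event):
    """Slower software stage: apply a tighter combined requirement using
    all the (here, fake) reconstructed quantities."""
    return event["muon_energy_gev"] + event["missing_energy_gev"] > 60

events = [
    {"muon_energy_gev": random.uniform(0, 100),
     "missing_energy_gev": random.uniform(0, 50),
     "muon_angle_deg": random.uniform(0, 90)}
    for _ in range(10_000)
]

after_l1 = [e for e in events if level_one(e)]   # level-one: cheap, aggressive cut
kept = [e for e in after_l1 if high_level(e)]    # higher-level: careful second pass
print(len(events), "->", len(after_l1), "->", len(kept))
```

Each stage only ever discards events, so the data volume shrinks monotonically, which is the whole point of the cascade.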

The higher-level trigger passes about 100 events per second to the hub of the LHC’s global network of computing resources—the LHC Computing Grid. A grid system combines the processing power of a network of computing centers and makes it available to users who may log in to the grid from their home institutes.

The LHC’s grid is organized into tiers. Tier 0 is at CERN itself and consists in large part of thousands of commercially bought computer processors, both PC-style boxes and, more recently, “blade” systems similar in dimensions to a pizza box but in stylish black, stacked in row after row of shelves. Computers are still being purchased and added to the system. Much like a home user, the people in charge look for the ever moving sweet spot of most bang for the buck, avoiding the newest and most powerful models in favor of more economical options.

The data passed to Tier 0 by the four LHC experiments’ data-acquisition systems will be archived on magnetic tape. That may sound old-fashioned and low-tech in this age of DVD-RAM disks and flash drives, but François Grey of the CERN Computing Center says it turns out to be the most cost-effective and secure approach.

Tier 0 will distribute the data to the 12 Tier 1 centers, which are located at CERN itself and at 11 other major institutes around the world, including Fermilab and Brookhaven National Laboratory in the U.S., as well as centers in Europe, Asia and Canada. Thus, the unprocessed data will exist in two copies, one at CERN and one divided up around the world. Each of the Tier 1 centers will also host a complete set of the data in a compact form structured for physicists to carry out many of their analyses.

The full LHC Computing Grid also has Tier 2 centers, which are smaller computing centers at universities and research institutes. Computers at these centers will supply distributed processing power to the entire grid for the data analyses.

Rocky Road
With all the novel technologies being prepared to come online, it is not surprising that the LHC has experienced some hiccups—and some more serious setbacks—along the way. Last March a magnet of the kind used to focus the proton beams just ahead of a collision point (called a quadrupole magnet) suffered a “serious failure” during a test of its ability to stand up against the kind of significant forces that could occur if, for instance, the magnet’s coils lost their superconductivity during operation of the beam (a mishap called quenching). Part of the supports of the magnet had collapsed under the pressure of the test, producing a loud bang like an explosion and releasing helium gas. (Incidentally, when workers or visiting journalists go into the tunnel, they carry small emergency breathing apparatuses as a safety precaution.)

These magnets come in groups of three, to squeeze the beam first from side to side, then in the vertical direction, and finally again side to side, a sequence that brings the beam to a sharp focus. The LHC uses 24 of them, one triplet on each side of the four interaction points. At first the LHC scientists did not know if all 24 would need to be removed from the machine and brought aboveground for modification, a time-consuming procedure that could have added weeks to the schedule. The problem was a design flaw: the magnet designers (researchers at Fermilab) had failed to take account of all the kinds of forces the magnets had to withstand. CERN and Fermilab researchers worked feverishly, identifying the problem and coming up with a strategy to fix the undamaged magnets in the accelerator tunnel. (The triplet damaged in the test was moved aboveground for its repairs.)

In June, CERN director general Robert Aymar announced that because of the magnet failure, along with an accumulation of minor problems, he had to postpone the scheduled start-up of the accelerator from November 2007 to spring of this year. The beam energy is to be ramped up faster to try to stay on schedule for “doing physics” by July.

Although some workers on the detectors hinted to me that they were happy to have more time, the seemingly ever receding start-up date is a concern because the longer the LHC takes to begin producing sizable quantities of data, the more opportunity the Tevatron has—it is still running—to scoop it. The Tevatron could find evidence of the Higgs boson or something equally exciting if nature has played a cruel trick and given it just enough mass for it to show up only now in Fermilab’s growing mountain of data.

Holdups also can cause personal woes through the price individual students and scientists pay as they delay stages of their careers waiting for data.

Another potentially serious problem came to light in September, when engineers discovered that sliding copper fingers inside the beam pipes known as plug-in modules had crumpled after a sector of the accelerator had been cooled to the cryogenic temperatures required for operation and then warmed back to room temperature.

At first the extent of the problem was unknown. The full sector where the cooling test had been conducted has 366 plug-in modules, and opening up every one for inspection and possible repair would have been prohibitively slow. Instead the team addressing the issue devised a scheme to insert a ball slightly smaller than a Ping-Pong ball into the beam pipe—just small enough to fit and be blown along the pipe with compressed air and large enough to be stopped at a deformed module. The sphere contained a radio transmitting at 40 megahertz—the same frequency at which bunches of protons will travel along the pipe when the accelerator is running at full capacity—enabling the tracking of its progress by beam sensors that are installed every 50 meters. To everyone’s relief, this procedure revealed that only six of the sector’s modules had malfunctioned, a manageable number to open up and repair.

When the last of the connections between accelerating magnets was made in November, completing the circle and clearing the way to start cooling down all the sectors, project leader Lyn Evans commented, “For a machine of this complexity, things are going remarkably smoothly, and we’re all looking forward to doing physics with the LHC next summer."