Sunday, November 16, 2008

The Future Will Be A Privatized Corporate Dystopia

By Trent Schlictmann

I beg to differ with my colleague. Having read the futuristic accounts of William Gibson, Neal Stephenson, and Philip K. Dick, I am convinced that the path our future shall take will be bleak, indeed–but in a much different way.

When the ongoing trend of corporate mergers reaches critical mass in 2030, the scant handful of corporations that remain will be too powerful to resist and will ultimately supplant all government. National borders will crumble, replaced by warring corporate armies who deploy vat-grown Yakuza assassins to take down enemy CEOs in the name of commerce.

The future will be every color but gray–not that the future will be worth living in. Giant videoscreen billboards will cover the exposed surface of every skyscraper, bombarding our consciousness with advertising for anything and everything. Looking up will expose us to giant orbiting mylar superscreens bearing more logos and slogans.

A citizen will be unable to walk down the street without encountering roving clouds made up of billions of microscopic nanoprobes that form corporate logos right before their very eyes.

Which is not to imply that the average citizen will do much walking: When every inch of space is privatized, it will cost money to walk from your living room to the kitchen. The average citizen will spend nearly all of his waking hours neurally jacked into the futuristic grandchild of the Internet, roaming cyberspace rather than moving and interacting in the inelegant, inconvenient three-dimensional world.

When we do log off the CyberNet, the very walls of our apartments will teem with droning media messages. Tolerating such in-home advertising will be the only way the average citizen will be able to afford an apartment at all. Only the wealthiest will be able to afford a quiet, dark room in which to sleep. The rest of us will simply become desensitized to the 24 hours of stimuli attacking our minds.

All media will consist of some form of advertising–print, audio, video–with some actually beamed directly into our brains. The theme song to every TV show will be a product jingle.

Newscasters will segue straight from war reports into soft-drink pitches without batting an eye.

To the powers that be, a citizen will be no more than a potential receptacle of consumption, only as valuable as his or her electronically catalogued personal wealth. All transactions will be conducted instantaneously by retinal scan, and credit fraud will be a crime worse than murder.

Oh, how I pity future generations.

The Future Will Be A Totalitarian Government Dystopia

By Timothy Geist

I am sad to say that for all our efforts in the name of freedom, the future shall be a bleak one, indeed. Such visionary authors as George Orwell and Robert Heinlein have mapped out the hellish future that awaits.

By the end of this century, the Earth will be controlled by a single unified world government–a government solely dedicated to perpetuating itself and keeping the populace under control. The first and greatest casualty of this New World Order shall be personal liberty.

Humans will live in identical, low-ceilinged, one-roomed concrete dwellings, outfitted with little more than a bed and a telescreen, arranged in endless grid patterns stretching to the horizon.

Our bleary-eyed descendants 100 years hence shall shuffle between their assigned tasks in gray, one-piece coveralls. What few possessions they enjoy will be meted out by the government, and even these spare trinkets will be small and inexpensive–a plastic comb, a morsel of chocolate, a new pair of shoes when the old ones have worn to unwearability.

Citizens will be assigned to various vocational fields, the most common being propaganda, bureaucracy, and the police. Those who perform with unerring loyalty will be rewarded with slightly larger dwellings and the right to lower the volume of their telescreens.

Unremovable electronic trackers will be implanted in our brains, monitoring our whereabouts and thoughts at all times. Citizens who harbor anti-authoritarian sentiments will be swiftly seized by jackbooted secret police and either put to death–a procedure filmed and displayed via telescreen as a grim warning to other would-be dissenters–or rehabilitated into blind servitude through torture and brainwashing.

Food will be prepared by machines and served in drab public mess halls. No fruits and vegetables for future-man: Every meal will be a flavorless, grainy paste designed to provide just enough nutrition to sustain life and nothing more–any more energy and the powers that be risk rebellion.

Oh, how I dread the future. May God protect our yet-unborn children.

Sunday, August 31, 2008

Study says eyes evolved for X-Ray vision

The advantage of using two eyes to see the world around us has long been associated solely with our capacity to see in 3-D. Now, a new study from a scientist at Rensselaer Polytechnic Institute has uncovered a truly eye-opening advantage to binocular vision: our ability to see through things.

Most animals — fish, insects, reptiles, birds, rabbits, and horses, for example — exist in non-cluttered environments like fields or plains, and they have eyes located on either side of their head. These sideways-facing eyes allow an animal to see in front of and behind itself, an ability also known as panoramic vision.

Humans and other large mammals — primates and large carnivores like tigers, for example — exist in cluttered environments like forests or jungles, and their eyes have evolved to point in the same direction.

While animals with forward-facing eyes lose the ability to see what's behind them, they gain X-ray vision, according to Mark Changizi, assistant professor of cognitive science at Rensselaer, who says eyes facing the same direction have been selected for maximizing our ability to see in leafy environments like forests.

All animals have a binocular region — parts of the world that both eyes can see simultaneously — which allows for X-ray vision and grows as eyes become more forward facing.

Demonstrating our X-ray ability is fairly simple: hold a pen vertically and look at something far beyond it. If you first close one eye, and then the other, you'll see that in each case the pen blocks your view. If you open both eyes, however, you can see through the pen to the world behind it.

To demonstrate how our eyes allow us to see through clutter, hold up all of your fingers in random directions, and note how much of the world you can see beyond them when only one eye is open compared to both. You miss out on a lot with only one eye open, but can see nearly everything behind the clutter with both.

"Our binocular region is a kind of 'spotlight' shining through the clutter, allowing us to visually sweep out a cluttered region to recognize the objects beyond it," says Changizi, who is principal investigator on the project. "As long as the separation between our eyes is wider than the width of the objects causing clutter — as is the case with our fingers, or would be the case with the leaves in the forest — then we can tend to see through it."

To identify which animals have this impressive power, Changizi studied 319 species across 17 mammalian orders and discovered that eye position depends on two variables: the clutter, or lack thereof in an animal's environment, and the animal's body size relative to the objects creating the clutter.

Changizi discovered that animals in non-cluttered environments — which he described as either "non-leafy surroundings, or surroundings where the cluttering objects are bigger in size than the separation between the animal's eyes" (think a tiny mouse trying to see through 6-inch wide leaves in the forest) — tended to have sideways-facing eyes.

"Animals outside of leafy environments do not have to deal with clutter no matter how big or small they are, so there is never any X-ray advantage to forward-facing eyes for them," says Changizi. "Because binocular vision does not help them see any better than monocular vision, they are able to survey a much greater region with sideways-facing eyes."

However, in cluttered environments — which Changizi defined as leafy surroundings where the cluttering objects are smaller than the separation between an animal's eyes — animals tend to have a wide field of binocular vision, and thus forward-facing eyes, in order to see past leaf walls.

"This X-ray vision makes it possible for animals with forward-facing eyes to visually survey a much greater region around themselves than sideways-facing eyes would allow," says Changizi. "Additionally, the larger the animal in a cluttered environment, the more forward facing its eyes will be to allow for the greatest X-ray vision possible, in order to aid in hunting, running from predators, and maneuvering through dense forest or jungle."

Changizi says human eyes have evolved to be forward facing, but that we now live in a non-cluttered environment where we might actually benefit more from sideways-facing eyes.

"In today's world, humans have more in common visually with tiny mice in a forest than with a large animal in the jungle. We aren't faced with a great deal of small clutter, and the things that do clutter our visual field — cars and skyscrapers — are much wider than the separation between our eyes, so we can't use our X-ray power to see through them," Changizi says.

"If we froze ourselves today and woke up a million years from now, it's possible that it might be difficult for us to look the new human population in the eyes, because by then they might be facing sideways."

Changizi's research was completed in collaboration with Shinsuke Shimojo at the California Institute of Technology, and is published online in the Journal of Theoretical Biology.

Source: Rensselaer Polytechnic Institute

Saturday, August 23, 2008

Cooking and Cognition: How Humans Got So Smart

After two tremendous growth spurts — one in size, followed by an even more important one in cognitive ability — the human brain is now a lot like a teenage boy.

It consumes huge amounts of calories, is rather temperamental and, when harnessed just right, exhibits incredible prowess. The brain's roaring metabolism, possibly stimulated by early man's invention of cooking, may be the main factor behind our most critical cognitive leap, new research suggests.


About 2 million years ago, the human brain rapidly increased its mass until it was double the size of other primate brains.

"This happened because we started to eat better food, like eating more meat," said researcher Philipp Khaitovich of the Partner Institute for Computational Biology in Shanghai.
But the increase in size, Khaitovich continued, "did not make humans as smart as they are today."

The early shift

For a long time, we were pretty dumb. Humans did little but make "the same very boring stone tools for almost 2 million years," he said. Then, only about 150,000 years ago, a different type of spurt happened — our big brains suddenly got smart. We started innovating. We tried different materials, such as bone, and invented many new tools, including needles for beadwork. Responding to, presumably, our first abstract thoughts, we started creating art and maybe even religion.

To understand what caused the cognitive spurt, Khaitovich and colleagues examined chemical brain processes known to have changed in the past 200,000 years. Comparing apes and humans, they found the most robust differences were for processes involved in energy metabolism.

The finding suggests that increased access to calories spurred our cognitive advances, said Khaitovich, carefully adding that definitive claims of causation are premature.

The research is detailed in the August 2008 issue of Genome Biology.

The extra calories may not have come from more food, but rather from the emergence of prehistoric "Iron Chefs"; the first hearths also arose about 200,000 years ago.

In most animals, the gut needs a lot of energy to grind out nourishment from food sources. But cooking, by breaking down fibers and making nutrients more readily available, is a way of processing food outside the body.

Eating (mostly) cooked meals would have lessened the energy needs of our digestion systems, Khaitovich explained, thereby freeing up calories for our brains.

Instead of growing even larger (which would have made birth even more problematic), the human brain most likely used the additional calories to grease the wheels of its internal functioning.

Digestion question

Today, humans have relatively small digestive systems and burn 20-25 percent of their calories running their brains. For comparison, other vertebrate brains use as little as 2 percent of the animal's caloric intake.
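
As a quick sanity check on those percentages, here is the arithmetic under an assumed intake of 2,000 kcal per day (the intake figure is my assumption, not from the article):

```python
# Rough arithmetic on the figures quoted above.
# Assumed daily intake of 2,000 kcal is illustrative, not from the article.
daily_intake_kcal = 2000

human_brain_share = (0.20, 0.25)   # humans: 20-25 % of calories, per the article
other_vertebrate_share = 0.02      # other vertebrates: "as little as 2 percent"

low, high = [daily_intake_kcal * s for s in human_brain_share]
print(f"Human brain: {low:.0f}-{high:.0f} kcal/day")
print(f"Typical vertebrate brain at the same intake: "
      f"{daily_intake_kcal * other_vertebrate_share:.0f} kcal/day")
# -> roughly 400-500 kcal/day versus about 40 kcal/day
```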

Does this mean renewing our subscriptions to Bon Appetit will make our brains more efficient? No, but we probably should avoid diving into the raw food movement. Devoted followers end up, said Khaitovich, "with very severe health problems."

Scientists wonder if our cognitive spurt happened too fast. Some of our most common mental health problems, ranging from depression and bipolar disorder to autism and schizophrenia, may be by-products of the metabolic changes that happened in an evolutionary "blink of an eye," Khaitovich said.

While other theories for the brain's cognitive spurt have not been ruled out (one involves the introduction of fish to the human diet), the finding sheds light on what made us, as Khaitovich put it, "so strange compared to other animals."

Saturday, June 28, 2008

Lights in the sky

William B Stoecker:

Debunkers are fond of "explaining" ufos as ball lightning, swamp gas, etc., but, inasmuch as we have next to no understanding of such phenomena, perhaps it would be more accurate to say that, for example, ball lightning is a ufo.

After all, the term "ufo" doesn't mean Pleiadian beam ship or Sirian mother ship; it means "unidentified flying object," and is an admission of the fact that there are objects or phenomena flying around in the sky that we don't even begin to understand.

In fact, there are a multitude of luminous phenomena in and beyond our atmosphere that we don't even begin to understand, and they all seem to be interrelated in some fashion; indeed, they may be different forms of the same thing.

For centuries, people have reported seeing moving balls of light, usually in swampy areas; these have been called swamp lights, will-o'-the-wisps, or jack-o'-lanterns.

Debunkers claim that these are nothing more than glowing or burning methane gas released by rotting vegetation in swamps. While there is no doubt that methane is produced this way, there is no known chemical process that would cause methane to coldly luminesce. And for it to burn, there would have to be something to ignite the gas, and most swamps don't come equipped with spark plugs.

If bubbles of methane were somehow ignited, they would simply produce a brief flame just above the water level. Yet scores of witnesses have reported discrete balls of light moving through the air, and holding themselves together for many seconds. Burning gas expands, rises, and cools. Ever turn on the gas range in your kitchen and have to chase a fireball around the room? I think not. In addition, many observers have reported a seeming awareness on the part of the lights; they seem to react to a human presence.

And then there is so-called ball lightning. This has been reported by so many reliable witnesses that mainstream science has had to admit that it exists. Tesla and others have even produced what appears to be ball lightning in laboratories, or from lit candles in microwaves (don't try this at home). Researchers have also produced similar effects by hitting silicon with an electric arc, leading some to suspect that natural ball lightning may be caused by ordinary lightning striking sandy soil; but the lights they produced were only a few centimeters in diameter and lasted, at most, one or two seconds.

By contrast, witnesses have reported natural ball lightning over a meter in diameter and lasting many seconds. It has been reported entering sealed aircraft cabins and closed rooms. At a church in Devon, England, on 10/21/1638, witnesses reported an eight-foot fireball that killed four people and injured sixty. The problem with trying to explain it is that a ball of hot, glowing gas, or plasma, would, like the burning methane discussed earlier, tend to expand, rise, and cool very quickly.

But not only do we not understand ball lightning (if, indeed, it is lightning), we don't fully understand ordinary lightning. In addition, satellites and high flying aircraft have photographed "red sprites," dim, red flashes as high as ninety kilometers with filaments extending down toward thunderclouds, often in clusters. Also photographed are blue jets that shoot up as narrow cones from the tops of thunderclouds.

The sprites somehow produce gamma ray bursts, and no one has any explanation for any of this.

Earthquake lights are another phenomenon that mainstream science admits is real, but cannot explain. Sometimes the lights are seen before or after the quake, and are typically white or bluish glows like an aurora. There are also reports of glowing spheres, or what appear to be flames coming from the ground.

Usually they are silent, but sometimes a crackling sound is reported, indicating that they might be electrical in nature. Theories include the release of methane from the ground, but, if so, what ignites it? Another theory is that ball lightning is an electrical discharge due to the piezoelectric effect (quartz crystals subjected to sudden impact or pressure produce electricity), but ball lightning has been reported at sea.

How would electric discharges find their way through hundreds of feet of sea water? Also, if the reports of glowing balls are correct, we are faced with the same problem as with swamp lights. What powers the glowing ball, and what holds it together? Also, earthquakes have caused electromagnetic disturbances and low-frequency radio emissions in the 10-20 kHz range. This, too, is a mystery.

Michael Persinger, a psychologist at Laurentian University in Sudbury, Ontario, is the organizer of the Behavioral Neuroscience Program. He has stimulated the temporal lobes of volunteers with weak magnetic fields, causing them to sense a "presence" in the room. This has led him to theorize that ufos are earthquake lights, and that the earthquake-caused electromagnetic disturbances somehow cause people to hallucinate full-fledged abductions.

It is quite a stretch from sensing a "presence" to the kind of experience alleged abductees report. In addition, abduction experiences usually happen when there is no earthquake, and often far from any fault lines. Persinger has theorized that even between quakes there may be strain on the rocks inside the Earth, and calls this the Tectonic Strain Theory.

Chris Rutkowski of the University of Manitoba has pointed out that common household electrical appliances subject people to far stronger electric and magnetic fields than have been produced by earthquakes, but do not cause hallucinations.

British researcher Michael Devereux, like Persinger, believes that lights may appear even when there is no earthquake, and he has organized the Dragon Project to study what he calls "earthlights."

At a very few ancient ruins he has detected magnetic and radiation anomalies, and possibly infrared and ultrasound effects. Note that crop circles are also commonly found near ancient ruins in England. Devereux has come to suspect that the lights are in some sense alive or aware, possessing at least some level of intelligence.

Videographer Jose Escamilla taped what he calls "rods" at Midway, New Mexico on 3/19/94 and many times since. He thinks that researchers like Ivan T. Sanderson and Trevor James Constable may have photographed them in the late nineteen fifties, and Constable used infrared film.

Basically, the rods appear to be some sort of bizarre flying creatures with rod-shaped bodies and projecting fins, that move through the air, and apparently through water and even solid rock at such speed that they are normally invisible to the human eye. When videotape is played back in extreme slow motion, they often appear. Before anyone accuses Escamilla of a hoax, consider the fact that dozens of other people have also taped them.

In one case, researchers taped some nocturnal moths, and, when the tape was played back in slow motion, the moths took on the appearance of rods due to a doubling effect inherent to videocameras. But many other rods have offset fins in multiples of three, an effect impossible to achieve with insects, whose wings are in multiples of two.

Escamilla points out that a rod photographed in Maryland appears to be partly behind a cloud, indicating that, whatever it was, it was colossal in size. In many of these videotapes, the cameras appear to be focused on infinity, making it difficult to explain the rods as insects very close to the camera. Escamilla and other researchers have also reported a seeming awareness on the part of the rods; often they swerve to avoid people.

I have discussed with Mr. Escamilla the possibility that the larger rods may be identical with at least some of the flying dragons of folklore. The idea that ufos, or some of them, may be some sort of bizarre life form, is very old. Whatever they are, they are worthy of further study, and, as common as they seem to be, studying them shouldn't be all that difficult.

It should also be fairly easy to study another phenomenon reported in recent years: orbs.

When electronic cameras are used with a flash attachment, glowing balls of light fairly often appear that were not visible at the time to the photographer or other people present when the picture was taken. Naturally, the manufacturers claim that these are not artifacts of the cameras caused by some defect. Debunkers claim that they are merely dust motes in the air. Perhaps so, but they appear in some pictures and not in others and seem to have no connection to the amount of dust present.

Researcher Paul J. Muir claims to have photographed them near ancient English ruins, or sacred sites, apparently making them identical to Devereux's earthlights. At some of these sites he claims to have triangulated their position using two cameras, apparently ruling out the dust mote theory, and, like Devereux, he has reported magnetic anomalies at some of the sites, and says that the orbs are electrically charged.

Note that these reports of electromagnetic anomalies seem common to many of these various phenomena, and have also been reported by ufo researchers.

If orbs and rods seem easy to study, the same is true of mysterious lights that show up over and over at the same locations. One of these would be Marfa, in West Texas, where balls of light, usually reddish in color, flashing off and on rapidly, have been seen by numerous observers, and also photographed and videotaped.

Debunkers claim that they are distant car headlights, but they are seen in directions away from the roads, and, anyway, were reported in 1883 before there was even a rail line in the area, let alone any automobiles. As with all of these phenomena, most professional scientists refuse to investigate them, leaving that task mostly to amateurs, whom the professionals then disparage. It might be useful to get detailed seismic and geological data on the area, to see if the lights cluster around fault lines or near certain minerals or ground water.

Acoustic data should be collected, as some observers have reported a high-pitched hum. Infrared and low-light cameras should be used, along with Geiger counters and ultraviolet detectors; and although the lights are only reported at night, that doesn't mean they are not present by day, merely invisible.

Similar lights have been reported on and around Brown Mountain Ridge in the Appalachian Mountains of North Carolina, dating back to pre-Civil War and Indian accounts, which would appear to rule out the convenient car-headlight theory.

They are seen floating among the trees, and sometimes larger lights split into several smaller ones, something also reported with ufos.

The League of Energy Materialization and Unexplained Phenomena Research, or LEMUR, has detected them with infrared cameras and observed them moving across rock faces. The lights have activated Geiger counters, but this may be due not to radiation but to the ionization of the air. LEMUR has also measured electrical currents moving through the ground, and radio emissions in the 140 kHz range.

They appear to be more common in times of high solar activity. There are thrust faults in the region, and caves and springs, and quartz and magnetite are common minerals around the ridge. One man claimed to have touched one and received an electric shock. All of this suggests something electrical in nature, but what? Note that caves and springs are reportedly also the site of many paranormal events.

Another of the many areas where mysterious lights are reported is Hessdalen, Norway.

Linda Moulton Howe and some Norwegian researchers have done a fairly detailed study of these lights. They have come to believe that about ninety five percent are not solid objects, but plasmas emitting low frequency radio, and varying in brightness and size but not in temperature. The researchers have somehow determined that some five percent of the lights contain solid objects, polished spheres up to forty centimeters in diameter.

Some are visible only in infrared, large lights have been seen emitting smaller ones, and they seem to have a complex structure of many small components vibrating around. A ufo of the flying triangle type has also been reported at Hessdalen.

In the town of Silver Cliff, Colorado, there are two cemeteries on Mill Street about a mile south of town, where numerous observers have reported a multitude of small lights in many colors flying around, and even following people.

This certainly suggests some kind of awareness, and the cemetery location suggests that the lights may be paranormal in nature, but no one knows for sure. Again, it would be easy for professional scientists to do a detailed study, if any of them cared enough to bother.

And then there is Toppenish Ridge on the Yakima Indian Reservation in Washington State, just east of the volcanic Cascade Mountains and in an area with several fault lines. For decades, many, many witnesses have reported flashing red and white lights and a few blue or orange ones moving around; often they move jerkily, or back and forth.

Fire lookouts, people whose observations generally have to be trusted, have often reported them. Much original work was done by David Akers on behalf of Dr. J. Allen Hynek, the astronomer who evolved from a ufo debunker into a ufo investigator. He found no magnetic anomalies.

Clearly we are dealing, at the very least, with a natural phenomenon, or several phenomena, that challenge our basic understanding of physics.

If these things are alive or aware in some way, the implications are disturbing. Could they be a bizarre life form? Or are they paranormal entities, challenging the current atheist/materialist paradigm? Or are we wrong to draw a line between life forms and the paranormal? Are they perhaps part of a continuum, or is all life in some sense paranormal?

And, since these are, by definition, ufos, does that mean that all ufos are a strange natural phenomenon, or bizarre life forms, or manifestations of the paranormal? Or are some of them spacecraft of unknown origin, while others are paranormal? Why do we assume that all ufos are essentially the same thing? Things may be more complex and strange than we can understand.

Wednesday, June 11, 2008

The Emergence of Virtual Sex

From Virtual Sex to No Sex?

An inquiry from a journalist about the phenomenon of sex in the virtual world Second Life got me waxing eloquent about a topic interwoven with my Cyborg Buddha book project: the future of sex.

Here is my thesis: the two most important developments in the technological control of sex are both already occurring. The first is separating sex from physical contact; the second is establishing control over our sexual feelings altogether.

Sex is already moving in a virtual direction, with widespread access to and use of porn, phone sex, video-interactive sex, sex in virtual worlds, and eventually teledildonics, the use of body suits and tactile equipment controlled from afar.

Electronically mediated sex and porn are safer (no diseases or pregnancy), easier (lengthy courtship and foreplay are unnecessary), more convenient (available any time you are) and more likely to be exactly what you want (your partners can be anyone, or anything, you desire, without any physical defects).

The virtualization of sex has progressed from the first erotic paintings and photographs to sex in Second Life. Teledildonics is the next step, and it has been around since the early days of the Web. But the equipment has been so crude that it has not provided a very interesting experience for many. In about ten years, however, I’m sure that Wii-sex will be quite popular.

The growing sophistication of AI and robotics to detect human emotion, anticipate human desires and respond in ways that simulate a human response will also speed the virtualization of sex. People who are too busy, shy, or unappealing, or whose preferences are too elaborate or taboo to reveal to a living person, may turn to robot sex as an alternative.

Of course, we will have a serious problem of robot rights if and when machine minds achieve true self-awareness - perhaps a problem of apocalyptic proportions - and this would affect robot sex like everything else. (It would be bad if the first god-like AI were a former sex slave.)

Lots of people are horrified that virtual sex and porn are reducing desire for and tolerance for physical sex, especially with spouses or partners. But I think that this is first a matter of individual preference; many will still prefer body sex. The decline in physical sex will also soon be overcome by neurotechnologies that control and channel sexual desire.

Soon, in addition to Viagra, we will have chemicals that increase and channel desire itself. Right now we can chemically castrate pedophiles and turn off their obsessive thoughts about children, along with all of their sex drive. We can stimulate sexual desire in men and women by increasing their testosterone. We can increase feelings of trust and bonding with oxytocin.

Eventually we will be able to directly stimulate the parts of the brain that desire specific partners or experiences. In the future we will be able to specifically turn off sexual thoughts about children, and turn on appropriate sexual thoughts about adults. We will be able to make gays straight, and straights gay, and everything in between. There will be no more necessity for sexual boredom between long term partners.

We will be able to wire ourselves to only desire sex with our spouses, to only desire it in-body, and to desire it according to an agreed upon frequency. Or we can turn off our jealousy, and turn up our libidos, if we have agreed to a polyamorous lifestyle.

When we have our brains laced with nano-neural networks (40 years?) we will eventually be able to experience completely virtual body sensation, so we can have equal or better quality sex with partners in virtual reality, or with combinations of virtual reality and material reality; two real people in a virtual space, a virtual partner in a real space, two real and one virtual person in a semi-real space, whatever.

Nano-neural networks and new psychopharmaceuticals will also allow us to modify and enhance sexual and emotional experience, to have orgasms as long and hard as we like, or no orgasms, or to have an experience of cosmic love and oneness instead of an orgasm, experienced as a bolt of tingles through every inch of our body.

Also, as we gain complete control over the neurochemistry of sex, love and bonding we can make conscious, explicit choices about our feelings and desires. Just as we have prenuptial contracts for property, partners may agree to lock their love and sexual desire onto their partners for a specified period, or at least go to marital counseling to have adulterous feelings modified.

This technology will also be a huge boon for celibate religious orders, which will be able to turn off their mendicants’ sexual feelings. (Perhaps not taking your celibacy pill will be the mark of the true self-flagellant.)

I suspect that as the range of sexual choices expand, and the potential for sexual addiction grows, a lot of people will adopt either strict monogamy or even celibacy, channeling all that energy into other pursuits.

The challenge will be to remain a liberal society as the birth rate drops and the risk of virtual sexual obsessions grows. These neurotechnological controls over sexuality could enable new forms of Puritanism and repression in authoritarian societies, "curing" homosexuals and enforcing monogamy on people against their will.

We will have to work hard to defend cognitive liberty and sexual liberalism against the forces of repression, partly by developing the means for people to control and channel their own sexuality. The debates over the limits on sex in virtual worlds are only the beginning.

Written by:
James Hughes Ph.D., the IEET Executive Director, is a bioethicist and sociologist at Trinity College in Hartford Connecticut USA. He is author of Citizen Cyborg and is working on a second book tentatively titled Cyborg Buddha. He produces a syndicated weekly radio program, Changesurfer Radio.

Wednesday, June 4, 2008

Key to All Optical Illusions Discovered

Humans can see into the future, says a cognitive scientist. It's nothing like the alleged predictive powers of Nostradamus, but we do get a glimpse of events one-tenth of a second before they occur.

And the mechanism behind that can also explain why we are tricked by optical illusions.

Researcher Mark Changizi of Rensselaer Polytechnic Institute in New York says it starts with a neural lag that most everyone experiences while awake. When light hits your retina, about one-tenth of a second goes by before the brain translates the signal into a visual perception of the world.

Scientists already knew about the lag, yet they have debated over exactly how we compensate, with one school of thought proposing our motor system somehow modifies our movements to offset the delay.

Changizi now says it's our visual system that has evolved to compensate for neural delays, generating images of what will occur one-tenth of a second into the future. That foresight keeps our view of the world in the present. It gives you enough of a heads-up to catch a fly ball (instead of getting socked in the face) and maneuver smoothly through a crowd. His research on this topic is detailed in the May/June issue of the journal Cognitive Science.
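
To get a feel for why a tenth of a second matters, here is a rough, illustrative calculation (the speeds are my assumptions, not figures from the article) of how far familiar objects move during that neural lag:

```python
# How much the world changes during the ~0.1 s neural lag described above.
# The listed speeds are illustrative assumptions, not data from the article.
lag_s = 0.1

speeds_m_per_s = {
    "walking person": 1.5,
    "sprinting person": 9.0,
    "thrown baseball": 40.0,
}

for label, v in speeds_m_per_s.items():
    print(f"{label}: moves {v * lag_s * 100:.0f} cm during the lag")
# A fast ball covers roughly 4 m in that tenth of a second, which is why an
# uncorrected, tenth-of-a-second-old view of the world would make catching
# it so difficult.
```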

Explaining illusions

That same seer ability can explain a range of optical illusions, Changizi found.

"Illusions occur when our brains attempt to perceive the future, and those perceptions don't match reality," Changizi said.

Here's how the foresight theory could explain the most common visual illusions — geometric illusions that involve shapes:

Something called the Hering illusion, for instance, looks like bike spokes around a central point, with vertical lines on either side of this central, so-called vanishing point. The illusion tricks us into thinking we are moving forward, and thus, switches on our future-seeing abilities. Since we aren't actually moving and the figure is static, we misperceive the straight lines as curved ones.

"Evolution has seen to it that geometric drawings like this elicit in us premonitions of the near future,” Changizi said. "The converging lines toward a vanishing point (the spokes) are cues that trick our brains into thinking we are moving forward — as we would in the real world, where the door frame (a pair of vertical lines) seems to bow out as we move through it — and we try to perceive what that world will look like in the next instant."

Grand unified theory

In real life, when you are moving forward, it's not just the shape of objects that changes, he explained. Other variables, such as the angular size (how much of your visual field the object takes up), speed and contrast between the object and background, will also change.

For instance, if two objects are about the same distance in front of you, and you move toward one of the objects, that object will speed up more in the next moment, appear larger, have lower contrast (because something that is moving faster gets more blurred), and literally get nearer to you compared with the other object.
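
The angular-size variable mentioned above follows from basic trigonometry: an object of width w at distance d subtends roughly 2·arctan(w/2d) of the visual field, so it looms larger as the distance closes. A small illustrative calculation (the object width and walking speed are my assumptions):

```python
# Looming geometry behind the paragraph above: angular size of an object of
# width w at distance d is 2 * atan(w / (2 * d)). Width and speed are
# illustrative assumptions, not figures from the article.
import math

def angular_size_deg(width_m, distance_m):
    return math.degrees(2 * math.atan(width_m / (2 * distance_m)))

w = 0.8                   # a doorway-sized object, 0.8 m wide
for d in (3.0, 2.85):     # now vs. one-tenth of a second later at 1.5 m/s
    print(f"at {d:.2f} m: {angular_size_deg(w, d):.2f} degrees")
# The object you are approaching swells in the visual field with every step,
# one of the cues a forward-extrapolating visual system can exploit.
```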

Changizi realized the same future-seeing process could explain several other types of illusions. In what he refers to as a "grand unified theory," Changizi organized 50 kinds of illusions into a matrix of 28 categories. The results can successfully predict how certain variables, such as proximity to the central point or size, will be perceived.

Changizi says that finding a theory that works for so many different classes of illusions is "a theorist's dream."

Most other ideas put forth to explain illusions have explained one or just a few types, he said. The theory is "a big new player in the debate about the origins of illusions," Changizi told LiveScience. "All I'm hoping for is that it becomes a giant gorilla on the block that can take some punches."

Monday, May 12, 2008

Modern technology is changing the way our brains work

By: Susan Greenfield

Human identity, the idea that defines each and every one of us, could be facing an unprecedented crisis.

It is a crisis that would threaten long-held notions of who we are, what we do and how we behave. It goes right to the heart - or the head - of us all.

This crisis could reshape how we interact with each other, alter what makes us happy, and modify our capacity for reaching our full potential as individuals.

And it's caused by one simple fact: the human brain, that most sensitive of organs, is under threat from the modern world.

Unless we wake up to the damage that the gadget-filled, pharmaceutically-enhanced 21st century is doing to our brains, we could be sleepwalking towards a future in which neuro-chip technology blurs the line between living and non-living machines, and between our bodies and the outside world.

It would be a world where such devices could enhance our muscle power, or our senses, beyond the norm, and where we all take a daily cocktail of drugs to control our moods and performance.
Already, an electronic chip is being developed that could allow a paralysed patient to move a robotic limb just by thinking about it.

As for drug manipulated moods, they're already with us - although so far only to a medically prescribed extent.

Increasing numbers of people already take Prozac for depression and Paxil as an antidote for shyness, and give Ritalin to children to improve their concentration.

But what if there were still more pills to enhance or "correct" a range of other specific mental functions?

What would such aspirations to be "perfect" or "better" do to our notions of identity, and what would it do to those who could not get their hands on the pills? Would some finally have become more equal than others, as George Orwell always feared?

Of course, there are benefits from technical progress - but there are great dangers as well, and I believe that we are seeing some of those today.

I'm a neuroscientist and my day-to-day research at Oxford University strives for an ever greater understanding - and therefore maybe, one day, a cure - for Alzheimer's disease.
But one vital fact I have learnt is that the brain is not the unchanging organ that we might imagine.

It not only goes on developing, changing and, in some tragic cases, eventually deteriorating with age, it is also substantially shaped by what we do to it and by the experience of daily life.

When I say "shaped", I'm not talking figuratively or metaphorically; I'm talking literally.

At a microcellular level, the infinitely complex network of nerve cells that make up the constituent parts of the brain actually change in response to certain experiences and stimuli.

The brain, in other words, is malleable - not just in early childhood but right up to early adulthood, and, in certain instances, beyond.

The surrounding environment has a huge impact both on the way our brains develop and how that brain is transformed into a unique human mind.

Of course, there's nothing new about that: human brains have been changing, adapting and developing in response to outside stimuli for centuries.

What prompted me to write my book is that the pace of change in the outside environment and in the development of new technologies has increased dramatically.

This will affect our brains over the next 100 years in ways we might never have imagined.

Our brains are under the influence of an ever-expanding world of new technology: multichannel television, video games, MP3 players, the internet, wireless networks, Bluetooth links - the list goes on and on.

But our modern brains are also having to adapt to other 21st century intrusions, some of which, such as prescribed drugs like Ritalin and Prozac, are supposed to be of benefit, and some of which, such as widely available illegal drugs like cannabis and heroin, are not.

Electronic devices and pharmaceutical drugs all have an impact on the micro-cellular structure and complex biochemistry of our brains. And that, in turn, affects our personality, our behaviour and our characteristics. In short, the modern world could well be altering our human identity.

Three hundred years ago, our notions of human identity were vastly simpler: we were defined by the family we were born into and our position within that family. Social advancement was nigh on impossible and the concept of "individuality" took a back seat.

That only arrived with the Industrial Revolution, which for the first time offered rewards for initiative, ingenuity and ambition.

Suddenly, people had their own life stories - ones which could be shaped by their own thoughts and actions. For the first time, individuals had a real sense of self.

But with our brains now under such widespread attack from the modern world, there's a danger that that cherished sense of self could be diminished or even lost.

Anyone who doubts the malleability of the adult brain should consider a startling piece of research conducted at Harvard Medical School.

There, a group of adult volunteers, none of whom could previously play the piano, were split into three groups.

The first group were taken into a room with a piano and given intensive piano practice for five days. The second group were taken into an identical room with an identical piano - but had nothing to do with the instrument at all.

And the third group were taken into an identical room with an identical piano and were then told that for the next five days they had to just imagine they were practising piano exercises.
The resultant brain scans were extraordinary. Not surprisingly, the brains of those who simply sat in the same room as the piano hadn't changed at all.

Equally unsurprising was the fact that those who had performed the piano exercises saw marked structural changes in the area of the brain associated with finger movement.

But what was truly astonishing was that the group who had merely imagined doing the piano exercises saw changes in brain structure that were almost as pronounced as those in the group that had actually practised.

"The power of imagination" is not a metaphor, it seems; it's real, and has a physical basis in your brain.

Alas, no neuroscientist can explain how the sort of changes that the Harvard experimenters reported at the micro-cellular level translate into changes in character, personality or behaviour.
But we don't need to know that to realise that changes in brain structure and our higher thoughts and feelings are incontrovertibly linked.

What worries me is that if something as innocuous as imagining a piano lesson can bring about a visible physical change in brain structure, and therefore some presumably minor change in the way the aspiring player performs, what changes might long stints playing violent computer games bring about?

That eternal teenage protest of 'it's only a game, Mum' certainly begins to ring alarmingly hollow.

Already, it's pretty clear that the screen-based, two-dimensional world that so many teenagers - and a growing number of adults - choose to inhabit is producing changes in behaviour.

Attention spans are shorter, personal communication skills are reduced and there's a marked reduction in the ability to think abstractly.

This games-driven generation interpret the world through screen-shaped eyes. It's almost as if something hasn't really happened until it's been posted on Facebook, Bebo or YouTube.

Add that to the huge amount of personal information now stored on the internet - births, marriages, telephone numbers, credit ratings, holiday pictures - and it's sometimes difficult to know where the boundaries of our individuality actually lie.

Only one thing is certain: those boundaries are weakening.

And they could weaken further still if, and when, neurochip technology becomes more widely available.

These tiny devices will take advantage of the discovery that nerve cells and silicon chips can happily co-exist, allowing an interface between the electronic world and the human body.

One of my colleagues recently suggested that someone could be fitted with a cochlear implant (a device that converts sound waves into electronic impulses and enables the deaf to hear) and a skull-mounted micro-chip that converts brain waves into words (a prototype is under research).

Then, if both devices were connected to a wireless network, we really would have arrived at the point which science fiction writers have been getting excited about for years. Mind reading!
He was joking, but for how long the gag remains funny is far from clear.

Today's technology is already producing a marked shift in the way we think and behave, particularly among the young.

I mustn't, however, be too censorious, because what I'm talking about is pleasure. For some, pleasure means wine, women and song; for others, more recently, sex, drugs and rock 'n' roll; and for millions today, endless hours at the computer console.

But whatever your particular variety of pleasure (and energetic sport needs to be added to the list), it's long been accepted that 'pure' pleasure - that is to say, activity during which you truly "let yourself go" - was part of the diverse portfolio of normal human life. Until now, that is.

Now, coinciding with the moment when technology and pharmaceutical companies are finding ever more ways to have a direct influence on the human brain, pleasure is becoming the sole be-all and end-all of many lives, especially among the young.

We could be raising a hedonistic generation who live only in the thrill of the computer-generated moment, and are in distinct danger of detaching themselves from what the rest of us would consider the real world.

This is a trend that worries me profoundly. For as any alcoholic or drug addict will tell you, nobody can be trapped in the moment of pleasure forever. Sooner or later, you have to come down.

I'm certainly not saying all video games are addictive (as yet, there is not enough research to back that up), and I genuinely welcome the new generation of "brain-training" computer games aimed at keeping the little grey cells active for longer.

As my Alzheimer's research has shown me, when it comes to higher brain function, it's clear that there is some truth in the adage "use it or lose it".

However, playing certain games can mimic addiction, and the heaviest users of these games might soon begin to do a pretty good impersonation of an addict.

Throw in circumstantial evidence that links a sharp rise in diagnoses of Attention Deficit Hyperactivity Disorder and the associated three-fold increase in Ritalin prescriptions over the past ten years with the boom in computer games and you have an immensely worrying scenario.
But we mustn't be too pessimistic about the future.

It may sound frighteningly Orwellian, but there may be some potential advantages to be gained from our growing understanding of the human brain's tremendous plasticity.

What if we could create an environment that would allow the brain to develop in a way that was seen to be of universal benefit?

I'm not convinced that scientists will ever find a way of manipulating the brain to make us all much cleverer (it would probably be cheaper and far more effective to manipulate the education system).

And nor do I believe that we can somehow be made much happier - not, at least, without somehow anaesthetising ourselves against the sadness and misery that is part and parcel of the human condition.

When someone I love dies, I still want to be able to cry.

But I do, paradoxically, see potential in one particular direction.

I think it possible that we might one day be able to harness outside stimuli in such a way that creativity - surely the ultimate expression of individuality - is actually boosted rather than diminished.

I am optimistic and excited by what future research will reveal into the workings of the human brain, and the extraordinary process by which it is translated into a uniquely individual mind.

But I'm also concerned that we seem to be so oblivious to the dangers that are already upon us.

Well, that debate must start now. Identity, the very essence of what it is to be human, is open to change - both good and bad. Our children, and certainly our grandchildren, will not thank us if we put off discussion much longer.

Wednesday, April 30, 2008

Uploading, Self-Transformation & Sexual Engineering

What is the lossiness of the uploading process? Physically, are we uploading the function of hormones along with that of the neurons, neurotransmitters, etc? What about the thymus, pituitary, testes or ovaries, pancreas, digestive tract, sexual organs, etc.?

The point where psychology leaves off and irrelevant physical detail begins is not clear, and even the most intellectually motivated person may be irrevocably disoriented if these kind of biological processes and motivations are removed suddenly.

If we upload our neurons, and leave behind the hormones, does that give us 90% of our previous identity, or 10%? I don't think the answer is at all clear. Much as we take pride in our intellect, most of us spend most of our time at the dictates of our hormones: eating, sleeping, seeking sex.

We might call the loss of biological substrate upon uploading the problem of dangling sexuality (so to speak :-), or more generally phantom biology, or phantom motivation. I refer to the phenomenon of "phantom limbs" in which pain etc. are felt in a limb even after it has been lost.

The big problem here is that an uploader may be left with no motivation at all. Why expand memory or CPU? Why search for better algorithms? Why explore the universe and put it to use?

Why not commit suicide or go hide in some archive until the universe ends? Without extropian motivations, there is no clear reason for or against doing any of these. And it's not just a matter of abstractly wanting to do these things; it's a matter of being hungry for them, of lusting after them, of falling head over heels in love with them.

Uploaders will likely face stiff competition: military/hacking competition from mature a-life and/or experienced uploaders, economic competition from evolving trader bots, etc.

If the uploader spends excessive CPU cycles simulating glands and hormones, recreating 3D landscapes and living out old sexual fantasies, etc. they may quickly go bankrupt. Depending on the rules of the uploader PPL, they may have their memory garbage collected and be filed away into a museum archive, may be merged into other consciousness (cf. proposal to auction off organs of bankrupt people in biological PPL), etc.

So even if the capability exists to simulate sexuality, hunger, taste for music, and other old dominant motivations in a minimally lossy way, competition may require these quickly be dumped overboard in favor of motivations that allow the agent to survive and grow in the new environment.

By the same token we can step back and examine what is valuable in our own environment, here today. We seem to live in an environment that is very forgiving compared with that of our forebears, and perhaps very forgiving compared to the world of a-life and uploaders.

But is our own environment really that forgiving? Isn't it an incredibly great loss when people die, for good, because they lack money for life extension and cryonic suspension, and do not make what might be called "semi-connected backups" (children, long-lived memes)?

Our current decisions and motivations are quite important, and even in today's environment biology may lead us astray. Consider the time we spend on work, recreation, entertainment, sexuality, eating, listening to music, etc. Do we do what is most extropic, or do we do what biology and culture have led us to want to do?

How can we transform ourselves into a more extropian state?

So, we see that the impedance of biology is not unique to uploaders; even today we can start the task of self-transformation from biological motivations suitable for a hunter-gatherer tribe, to extropian motivations suitable for today. Taking this a step further, this leads us to ways of avoiding a sudden change in connectedness upon uploading.

Well before we begin the physical process of uploading, we can begin the psychological process of uploading. This requires that we anticipate the uploader environment, and what it will take for us to succeed there, and transform ourselves in that direction even as we remain in our biological bodies. By the time we upload, we should be already well-transformed; psychologically prepared to do combat in cyberspace. Such transformation might include:

Fetish engineering: moving our sexuality away from biological and towards economic or information-resource triggers. We may be able to learn from precedents (eg monks), but mostly we will be breaking new ground. Anti-aphrodisiacs or training on fetish objects with aphrodisiacs may help here.

Finding fetishes that suitably motivate abstract goals such as learning, life extension, child bearing and raising, expansion of the bank account, etc. may not be easy. Fetish engineering may be the most important and productive form of self-transformation, if we can pull it off.

Aesthetic shift: retrain ourselves to get aesthetic pleasure from technological or economic accomplishment, eg hacking good code or making a good deal in the market, and less from similar but less useful aesthetic pastimes like music or good cooking.

It may be necessary to develop very sophisticated tastes in music, cooking, etc. before such transformation is possible; or perhaps the opposite is true, that sophistication in music or cooking detracts from technological or economic sophistication.

It's important that we resolve these issues; otherwise we won't know whether we're transforming towards or away from a consciousness suited to our environment, or ready for uploading.

Thirst, hunger, satiation, and taste: In the uploader, these need to be linked to new resources: taste buds to sense, and software to respond to memory and CPU cycles and power sources instead of fats and proteins and carbohydrates. Today, many of us already find these biological motivations excessive, to the point of distracting us from intellectual tasks and even being downright unhealthy (eg overweight lowers life expectancy).

This suggests also empirical benchmarks, telling us how far we've progressed on the road from human to transhuman to posthuman. How much of our motivational energy, or more measurably how many hours per day, are spent chasing obsolete biological ends? How much are spent in ways that would be beneficial to an uploader? If we spend more of our time at the latter, we might truly call ourselves transhuman; if our schedule is dominated by what would also concern the uploader, then we have reached posthumanity.

All this may sound very cold & dry: trying to turn sophisticated biological & cultural tastes into cold & abstract mechanisms. Far from it! The fact that we find the machines cold & abstract is a problem endemic to humans; the transhuman task is developing tastes for the new resources that not only rival our current _haute cuisine_, sexual skills and romantic subtleties, but go beyond them in both sophisticated elegance and raw powerful lust.

It may also sound quite perverted, engineering fetishes in place of "normal, healthy" sexuality. But we already live in a world where the genetic goal of sex is short-circuited by birth control, sex is perverted by pornography, brood-care perverted by pets and dolls, etc. Yesterday's perversion can be tomorrow's route to success or failure.

Political correctness, social norms, and our current personal tastes bear no relation to the outcome unless they have either emerged to a state of rationality, or been designed rationally -- and even then are subject to continual obsolescence as culture evolves around us. Putting these genies back in the bottle is futile.

The idea here is to sort out our perversions, to figure out which are extropic and which entropic, and use both perversions and self-discipline to transform ourselves into ever greater heights of extropy.

Nick Szabo

Sunday, April 27, 2008

Humanity's close shave with Extinction

It was a very close call.

If a devastating drought that gripped Africa had lasted just a little longer, or been a little worse, we would not be here today.

There would be no humans, no cities, no art and no science. There would be no wars and no human-induced climate change. The world would belong to the animals.

An international genetics project has found that modern humans almost became extinct 70,000 years ago.

The Genographic Project, led by American and Israeli researchers, made the discovery after conducting the most extensive mitochondrial DNA survey ever undertaken in Africa.

In 1987 a study of mitochondrial DNA, passed down the generations via the maternal line, revealed that every person alive today is descended from one woman who lived in Africa 200,000 years ago.

The latest study shows that after the birth of humanity in eastern Africa, people quickly split into separate communities.

About 150,000 years ago humans, possibly pursuing animal herds, moved to settle throughout Africa. The number of people soared, peaking somewhere between 10,000 and 100,000.

But before the first person could venture out of Africa, the population suddenly crashed to just 2000.

"It could have been even fewer," said Spencer Wells, the Genographic Project director. "We were, in effect, hanging on by our fingertips."That's fewer people than there are Sumatra orang-utans today, and they are classified as extremely endangered and will probably go extinct in 20 years."

The crisis was probably caused by climate change. About 130,000 years ago the world started cooling and drying as it neared another ice age. "There were massive droughts in Africa ... mega-droughts," Dr Wells said. With much of the continent barren and hostile, the tiny human settlements became isolated from each other.

As humanity hovered on the edge of extinction "a shift in culture began. People began making the better hunting tools they needed to survive the drought. Art makes its appearance. There is abstract thought," he said.

Then the drought broke. Isolated communities migrated and merged. With better skills and a friendlier climate, the population boomed again and people finally left Africa, spreading along the Asian coast, towards Australia.

Backed by National Geographic and IBM, the researchers, who have published their findings in The American Journal Of Human Genetics, identified humans' near demise after studying DNA mutation rates.

"By sampling people alive today, estimating how much genetic variation they have ... and knowing the rate at which variation accumulates we can say how long it took to accumulate the observed level of variation, and the size of the starting population," Dr Wells said.

The project aimed to discover what humans were doing before leaving Africa.

"Three quarters of our history is virtually unknown," Dr Wells said. The research showed "there was lots going on".

He believes humanity's close shave should send a message to the 6.6 billion people alive today. "We should start to see ourselves as the lucky survivors."

Saturday, April 26, 2008

Probability of Advanced Life Forms in our Galaxy

It is almost an article of faith among many ufologists that, of course, the ufos are piloted by beings from distant star systems.

Some go further and make sweeping statements about Pleiadean beamships and Sirian motherships. Sirius and all the stars in the Pleiades cluster are, by the way, only a fraction of the age of our Earth and Sun.

But let us address two questions. Is there anyone out there, and, if so, could they come here? The answer to the first question, of course, is that we don't know for sure, and, while it seems likely in such a vast universe that someone is out there, there are reasons to believe that technological civilizations are few and far between.

As to the second question, most certainly they could come here, if they exist and have sufficient wealth (interstellar flight is likely to be an expensive undertaking) and are sufficiently advanced.

As discussed in another article, even if faster-than-light speeds or "warp drive" prove impossible, and even if the tenuous interstellar medium, composed mainly of hydrogen, makes near-light-speed travel impossible, immense "space arks," rotating to produce artificial gravity and containing entire ecosystems, with cities and farms, would allow "manned" interstellar flight. Such voyages could take centuries or even millions of years, and generations would be born and die on the way.

But the first question is more difficult. Are they out there? Astronomers estimate the number of stars in our galaxy at 100 to 400 billion, say 200 billion, and they estimate that there may be 100 billion other galaxies in our universe. Bear in mind that without some kind of "warp drive" intergalactic voyages are problematic at best.

Our galaxy is believed to have a dense nucleus and six spiral arms, with a halo of dense globular star clusters surrounding the nucleus. We are thought to be about a third of the way out on what is called the Orion Arm, and our galaxy is believed to be about 100,000 light years across. So, at first glance, you would expect us to have plenty of company.

Not so fast. Most of the stars are packed into the globular clusters and the dense central regions of the galaxy. Most of these are so closely surrounded by other stars that their planets would have no night, so bright would the starlight be.

Even if they were not too hot from all that energy, any life forms there, or at least advanced life forms, would be periodically wiped out by nearby supernovae. These are hostile environments.

In addition, our Sun is a second-generation population one star, with enough of the elements heavier than hydrogen and helium, or enough "metallicity" as astronomers put it, to have solid planets like the Earth and elements like carbon and oxygen for life as we know it. The first stars, population three, apparently no longer exist, and the later population two stars are still so poor in heavier elements that they could not have solid planets or carbon-based life forms.

So out of our 200 billion stars, we have perhaps 20 billion that might have advanced life forms. Bear in mind that this is just an educated guessing game, an updated version of Drake's Equation, formulated in 1961 by Dr. Frank Drake.

Stars much more massive than our Sun fuse their hydrogen more rapidly and go off the Hertzsprung-Russell diagram's main sequence sooner; that is, they become unstable, and, depending on their mass, either swell up into red giants or explode as supernovae.

Either way, their planets are destroyed and any life is wiped out. It has taken us four point six billion years to "evolve," and, while as an advocate of intelligent design I can't say it is impossible for civilizations to develop in only one or two billion years or less, it seems unlikely. Stars much less massive than our Sun, red dwarves, are unstable.

To receive enough light and warmth for life, a planet would have to orbit very close to its star (one such system has recently been discovered) where it would periodically be blasted by solar flares.

Also, the steep tidal gradient would probably induce heating of the planet's interior (like Jupiter's moon Io), which, added to the heat already generated in the interior of any near Earth sized planet, would cause too much vulcanism for anything to survive.

Eventually the planet would become tidally locked, like our own Moon, with one side always burned by its parent star and the other side frozen in eternal night. Being very generous, we are now down to about two billion stars with the right luminosity.

Most stars are multiples, with two or three or (rarely) more stars orbiting a common center of gravity. If the two stars of a binary system are either very close together or very far apart, one or both might have a planet with a stable orbit receiving the right amount of energy. Otherwise, no dice.

This brings us down to perhaps a billion stars. But it turns out that, far from us being the new kids on the block, our Sun is one of the oldest population one stars. So now we are down to perhaps a hundred million stars with habitable planets old enough for civilizations to have developed.

Since our Sun, like all stars, slowly gets hotter as it ages, it will destroy all life long before it goes off the main sequence.

Also, the craters we see on every solid world in our Solar System show that we are in a shooting gallery, and it is just pure luck (or Divine Providence) that we have not been destroyed by asteroid or cometary impacts, and that we "evolved" before our Sun became too hot.

So now we are down to maybe ten million stars suitable for advanced life forms. If a tenth of those have advanced civilizations on their planets, we would have the company of one million worlds out of a galaxy of 200 billion stars.

That is one in 200 thousand.
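The chain of reductions above is easy to check as a back-of-the-envelope calculation. The sketch below simply multiplies out the author's own fractions; every factor is his educated guess, not established data:

    # Back-of-the-envelope reproduction of the article's filtering argument.
    # Each fraction is the author's guess, carried over from the text.

    stars_in_galaxy = 200e9

    filters = [
        ("right galactic location and metallicity",   0.10),  # -> 20 billion
        ("suitable luminosity / stellar mass",        0.10),  # ->  2 billion
        ("stable orbit despite stellar multiplicity", 0.50),  # ->  1 billion
        ("old enough for a civilization to arise",    0.10),  # -> 100 million
        ("survives impacts and a warming star",       0.10),  # ->  10 million
        ("actually develops a civilization",          0.10),  # ->   1 million
    ]

    remaining = stars_in_galaxy
    for name, fraction in filters:
        remaining *= fraction
        print(f"{name:45s} -> {remaining:,.0f}")

    print(f"\nThat is one star in {stars_in_galaxy / remaining:,.0f}")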

Bear in mind that most of the solar systems we have yet discovered are very different from our own, with gas giants orbiting very close to their stars, and that many astronomers believe that our Moon, formed by Earth's freakish and improbable collision with a large planetesimal, may be essential to keeping our axial inclination stable enough for us to live here.

Still, a million is not bad. But where are they?

The SETI people, who mock us ufologists who have nothing but radar and visual sightings and videotape, have been listening for radio messages from other planets for decades. They have checked hundreds of stars on many wavelengths, and have never yet detected a single message.

Consider this.

If only one star in a million in our galaxy, or 200,000, had a planet with a technological culture, and only one in a hundred of those was broadcasting radio, that would be 2,000 message senders.

If only one in ten of those was within radio range, we should be listening to 200 other civilizations.
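The arithmetic in that chain is simple enough to spell out; a minimal sketch using the article's own one-in-a-million, one-in-a-hundred and one-in-ten figures:

    # The article's SETI arithmetic, spelled out. All fractions are the
    # author's illustrative guesses.

    stars = 200e9
    technological = stars / 1_000_000    # one star in a million -> 200,000
    broadcasting = technological / 100   # one in a hundred      ->   2,000
    in_range = broadcasting / 10         # one in ten            ->     200

    print(f"{technological:,.0f} technological cultures")
    print(f"{broadcasting:,.0f} broadcasting radio")
    print(f"{in_range:,.0f} within radio range of Earth")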

So where are they?

It is beginning to look as though we don't have very much company out there after all. Perhaps the ufos come from much, much closer to home.

William B Stoecker

Friday, April 18, 2008

Bruce Effect

The Bruce effect is a form of pregnancy disruption in mammals in which exposure of a female to an unknown male results in preimplantation (Bruce 1959) or postimplantation failure.

Some form of pregnancy block or disruption has been reported in the laboratory for at least 12 species of rodents, including domestic mice, Mus musculus; deer mice, Peromyscus; and voles, Microtus.

The basic design of these experiments is that a recently inseminated female is exposed directly to an unfamiliar, nonsire male or to its urine or soiled bedding, which in turn causes her to prevent implantation or to abort or reabsorb her embryos. Pregnancy disruption may occur at any time from conception to 17 days postmating, depending on the species and experimental conditions.

Variables such as length of exposure, timing of exposure to a strange male, sexual experience, and behavior of strange males may all influence the degree of pregnancy failure. The overall implication is that some level of exposure to strange males disrupts normal pregnancy in female rodents. This response supposedly is adaptive for the male, in that termination of pregnancy results in the female coming into estrus within 1 to 4 days, providing the male with a mating opportunity.

The benefit to the female is less clear, but if the strange male were to commit infanticide and kill her offspring after parturition, a female could conserve reproductive effort by aborting her current litter and mating with the new male. Thus, pregnancy block, or termination of pregnancy, supposedly evolved as a female counterstrategy to infanticide by males.

The Bruce effect has not been demonstrated outside the laboratory, and does not occur in wild grey voles, so it might be a laboratory artifact.

Bateman's Principle

In biology, Bateman's principle is the theory that females almost always invest more energy into producing offspring than males, and therefore in most species females are a limiting resource over which the other sex will compete.

Typically it is the females who have a relatively larger investment in producing each offspring. A single male can easily fertilize all a female's eggs: she will not produce more offspring by mating with more than one male. A male is capable of fathering more offspring than any (one) female can bear, if he mates with several females.

By and large, a male's reproductive success increases with each female he mates with, whereas a female's reproductive success is not increased nearly as much by mating with more males. This results in sexual selection, in which males compete with each other, and females become choosy in which males to mate with.

Bateman's observations came from his empirical work on mating behaviour in fruit flies. He attributed the origin of the unequal investment to the differences in the production of gametes: sperm are cheaper than eggs. Animals are therefore fundamentally polygynous, as a result of being anisogamous.

"A female can have only a limited number of offspring, whereas a male can have a virtually unlimited number, provided that he can find females willing to mate with him. Thus females generally need to be much choosier about who they mate with." --Caspar Hewett, 2003

"A male can easily produce sperm in excess of what it would take to fertilize all the females that could conceivably be available. Hence the development of the masculine emphasis on courtship and territoriality or other forms of conflict with competing males." --Williams, 1966.
"in most animals the fertility of the female is limited by egg production which causes a severe strain on their nutrition.

In mammals the corresponding limiting factors are uterine nutrition and milk production, which together may be termed the capacity for rearing young. In the male, however, fertility is seldom likely to be limited by sperm production but rather by the number of inseminations or the number of females available to him...

In general, then, the fertility of an individual female will be much more limited than the fertility of a male... This would explain why in unisexual organisms there is nearly always a combination of an undiscriminating eagerness in the males and a discriminating passivity in the females." --Bateman, 1948.

"among polygynous species, the variance in male reproductive success is likely to be greater than the variance in female reproductive success." --Huxley, 1938.

"The female, with the rarest exceptions, is less eager than the male... she is coy, and may often be seen endeavouring for a long time to escape." --Darwin, 1871.

Tuesday, April 15, 2008

Study dampens hopes of finding E.T.

Advanced ground and space-based telescopes are discovering new planets around other stars almost daily, but an environmental scientist from England believes that even if some of those planets turn out to be Earthlike, the odds are very low they'll have intelligent inhabitants.

In a recent paper published in the journal Astrobiology, Professor Andrew Watson of the University of East Anglia describes an improved mathematical model for the evolution of intelligent life as the result of a small number of discrete steps.

Evolutionary step models have been used before, but Watson (a Fellow of England's Royal Society who studied under James Lovelock, inventor of the "Gaia hypothesis") sees a limiting factor: The habitability of Earth (and presumably, other living worlds) will end as the sun brightens.

Like most stars, as it progresses along the main sequence, the sun's output increases (it is believed to be about 25 percent brighter now than when Earth formed). Within at most 1 billion years, this will raise Earth's average temperature to 122 degrees Fahrenheit (50 degrees Celsius), rendering the planet uninhabitable.

Four major steps

Applying the limited lifespan to a stepwise model, Watson finds that approximately four major evolutionary steps were required before an intelligent civilization could develop on Earth.

These steps probably included the emergence of single-celled life about half a billion years after the Earth was formed, multicellular life about a billion and a half years later, specialized cells allowing complex life forms with functional organs a billion years after that, and human language a billion years later still.

Several of these steps agree with major transitions that have been observed in the geological record.

Watson estimates the overall probability that intelligent life will evolve as the product of the probabilities of each of the necessary steps.

In his model, the probability of each evolutionary step occurring in any given epoch is 10 percent or less, so the total probability that intelligent life will emerge is quite low (less than 0.01 percent over 4 billion years). Even if intelligent life eventually emerges, the model suggests its persistence will be relatively short by comparison to the lifespan of the planet on which it developed.

The mathematical methods Watson used assume that each evolutionary step is independent of the others, though they must occur in sequence. Watson considers this "a reasonable first approximation for what is, after all, a very idealized sort of model, deliberately simplified enough that the math can be solved analytically."
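A rough reading of those figures (a deliberate simplification for illustration, not Watson's full analytical model): if each of four independent steps has at most a 10 percent chance of occurring in the time available, the chance that all four occur is at most 0.1 to the fourth power, i.e. 0.01 percent:

    # Simplified illustration of the headline multiplication implied by the
    # article's figures; this is not Watson's full model.

    p_step = 0.10    # upper bound on the probability of each step in the window
    n_steps = 4      # approximate number of critical steps

    p_intelligence = p_step ** n_steps
    print(f"Upper bound on probability of intelligent life: {p_intelligence:.2%}")  # 0.01%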

Watson also suggests that some of the critical steps may have changed the biosphere irreversibly.

The development of photosynthetic plants, for example, led to an oxygen atmosphere, which was a necessary precursor to the development of complex land animals. Once this transition occurred, any further evolutionary step would have to take place in an oxygen atmosphere, which may have limited opportunities for non oxygen-breathing life to evolve.

Watson says in the conclusion to his paper: " ... only on those rare planets on which complex creatures happen to evolve can there exist observers who ask questions about evolution and care about the answers."

Asked if an advanced, space-faring civilization might be able to survive the brightening of its star by migrating off the planet where it evolved, Watson agrees that's possible: "the model predicts only when 'intelligence' can arise based on the time available. Once the observers exist, they might do all manner of things to find new places to live."

Seth Shostak, senior astronomer at the SETI Institute, had this comment on Watson's work: "We have, of course, only one example of intelligent life (indeed, of life of any type). That means we cannot possibly estimate from this single instance what is the probability of life on other worlds unless we are completely confident we understand all the relevant evolutionary processes.

Watson argues that intelligent life will be dismayingly rare: There is no way to prove that is true. On the other hand, if the converse is the case — if the galaxy is home to many intelligences — that is amenable to proof. We should do the experiment."

Wednesday, April 2, 2008

The Real Matrix

By Glenn Campbell

In the movie The Matrix, we discover that life as we know it is a computer generated virtual reality illusion. The "real" reality is something much bleaker and more desperate, where a few escaped humans are fighting an oppressive machine that is using people as an energy source. The illusion itself is called "the Matrix," and once you escape from it, you can never go back, at least with the same naive perceptions.

This is the perfect metaphor for life as we know it. Most of us are still living in the Matrix, being fed soothing illusions by our social environment and mass media. The Matrix gives us distorted information about life and tries to make us believe that the world is a happy place. When we break out of the Matrix, we see a much darker universe where hardly anything is going right. Once we see our delusions for what they are, it is hard to go back to believing in them.

The true requirements of life are simple: food, health, self-regulation and meaningful interaction with others. The social Matrix we are living in piles all sorts of useless products on top of this, like fashion that doesn't make you attractive, entertainment that doesn't entertain, and illusions of a perfect life that can never be attained. The Matrix sets you up with goals and assumptions that may be entirely out of line with how the world really works and what really makes you happy.

All of this serves the needs of the machine but not necessarily our own needs.

It serves the goals of the machine to create an illusion of normative happiness. Social and Capitalist forces want you to believe that life is basically good and wholesome, with only a few minor problems, like underarm odor and an absence of full-flavored taste in your beer or cigarettes. Lo and behold, most of the problems identified by the machine can be solved by purchasing the right product: a new car, perhaps, or even a religion.

The Matrix is a very trivial place, obsessed with insignificant things like sports, sitcoms and the scandals of celebrities. The Matrix gives us Martha Stewart, fly fishing and a million different ways you can waste time before you die. The Matrix encourages you to fiddle while Rome burns.

If you aren't personally happy, then it must be your fault. You must not have purchased the right product. The illusion of normative happiness makes us feel even worse when misfortune befalls us. "This isn't supposed to happen," we say, yet tragedy does happen, and nothing in the Matrix has prepared us for it. When you suffer, you usually have to do it alone, because no one else within the Matrix has been trained to deal with it.

If we encounter a problem that isn't readily solvable, like mental illness, crime or world hunger, the Matrix tells us that this is an anomaly. Ninety-nine percent of life is fine, we think; it is only this one percent that doesn't seem to be working out so well.

Naturally, we start looking around for some sort of simple product to solve that nagging one percent. Maybe we need more Capitalism to solve the world hunger problem, and maybe we should put a gun into every citizen's hand to take care of crime. Life is happy, you understand, and all that is bothering us is a few solvable problems.

Once you break out of the Matrix, you see the opposite: Ninety-nine percent of the world is painful and desperate, wasting human resources on a huge scale. There are only a few little islands of happiness, where people are working well with each other and individual potential is close to being achieved. Everyone else is enslaved.

We don't take much notice of the true bleakness of the world because routine tragedy doesn't get much press. Pessimism doesn't sell commercial products, only false optimism does. Even our parents gave us sugar-coated fairy tales about the world because it was much easier to raise us that way. People who are fed a steady stream of pleasant delusions and simplistic goals are easier to manage. It is like giving them drugs to keep them subdued.

In the real world, human lives are wasted on a massive, production-line scale. Nearly every baby starts out with great promise, but very few adults fulfill it. Somewhere between infancy and adulthood, the spirit and creativity of most people on Earth are crushed. Instead of attaining something approaching their potential, most people are turned into Soylent Green—dumb food for other people.

You don't have to go to industrial China to find broken, exploited and wasted humanity. It is all around us. Maybe we are one of the wasted. Maybe we are not achieving our own potential because of the social situation we have found ourselves in or because of our own unachievable delusions of what life should be.

Given the option, which are we going to seek: real internal satisfaction based on our own experience or the product-based delusion of satisfaction as fed to us by the machine? Usually, the machine wins.

Breaking out of the Matrix, we discover that the problems of the world are massive and essentially unsolvable, at least by any power that we personally possess. Most people, even close to us, are living lives of either acute pain, numbing servitude or mindless delusion.

The family next door, we may know full well, is psychologically abusing their child and will turn him into a screwed up adult, but we may also recognize that there is little we can do about it. There is little we can do about most of the suffering of the world, because there is so much of it and our own powers are so limited.

Breaking out of the Matrix means seeing, for the first time, all of the rich, polychromatic suffering of the world and not flinching from it. This planet is a horrible place, and we have landed in the middle of it. It is like the science fiction story about the psychic who can read people's minds but can't turn it off. He feels all of the suffering of millions and often wishes that he didn't have that power.

Seeing all of the pain of the world doesn't mean you have to go mad. It just requires a different perspective. If there is far more suffering than you can do anything about, this can actually be liberating.

When you were locked in the Matrix and you saw on TV that some family or group was suffering, you felt compelled to help, because that suffering was seen as an unusual event—a disruption of your happy view of the world.

If you now recognize that suffering is everywhere, most of it never seen on TV, then you also have to realize that you can't address all of it. You have to be selective and intelligent in the way that you help and not just blindly donate to the number on your screen.

Breaking out of the Matrix lifts a veil from your eyes and gives you vision. You can now recognize social delusions for what they are: sales messages to serve the needs of others. You can now see that tragedy is everywhere and that there is little you can do about most of it, so you help where you can and sleep comfortably when you can't.

Without your delusions, you see that your own place in the world is very weak and fragile. All you really have is a few little slivers of discretion. With them, you can try to build your own tentative happiness and rescue that tiny portion of the world that you have some control over.

Tuesday, April 1, 2008

Simulated Reality- An Interesting Perspective

By JOHN TIERNEY

Until I talked to Nick Bostrom, a philosopher at Oxford University, it never occurred to me that our universe might be somebody else’s hobby. I hadn’t imagined that the omniscient, omnipotent creator of the heavens and earth could be an advanced version of a guy who spends his weekends building model railroads or overseeing video-game worlds like the Sims.

But now it seems quite possible. In fact, if you accept a pretty reasonable assumption of Dr. Bostrom’s, it is almost a mathematical certainty that we are living in someone else’s computer simulation.

This simulation would be similar to the one in “The Matrix,” in which most humans don’t realize that their lives and their world are just illusions created in their brains while their bodies are suspended in vats of liquid. But in Dr. Bostrom’s notion of reality, you wouldn’t even have a body made of flesh. Your brain would exist only as a network of computer circuits.

You couldn’t, as in “The Matrix,” unplug your brain and escape from your vat to see the physical world. You couldn’t see through the illusion except by using the sort of logic employed by Dr. Bostrom, the director of the Future of Humanity Institute at Oxford.

Dr. Bostrom assumes that technological advances could produce a computer with more processing power than all the brains in the world, and that advanced humans, or “posthumans,” could run “ancestor simulations” of their evolutionary history by creating virtual worlds inhabited by virtual people with fully developed virtual nervous systems.

Some computer experts have projected, based on trends in processing power, that we will have such a computer by the middle of this century, but it doesn’t matter for Dr. Bostrom’s argument whether it takes 50 years or 5 million years. If civilization survived long enough to reach that stage, and if the posthumans were to run lots of simulations for research purposes or entertainment, then the number of virtual ancestors they created would be vastly greater than the number of real ancestors.

There would be no way for any of these ancestors to know for sure whether they were virtual or real, because the sights and feelings they’d experience would be indistinguishable. But since there would be so many more virtual ancestors, any individual could figure that the odds made it nearly certain that he or she was living in a virtual world.
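The odds here amount to a simple counting argument. A minimal sketch of the bookkeeping (the population figures and the number of simulations are arbitrary placeholders, not numbers from Dr. Bostrom’s paper):

    # Counting argument behind the simulation hypothesis: if simulated
    # ancestors vastly outnumber real ones, a randomly chosen observer is
    # almost certainly simulated. All numbers are arbitrary placeholders.

    real_ancestors = 100e9             # rough count of humans who have ever lived
    simulations_run = 1_000            # hypothetical number of ancestor simulations
    people_per_simulation = 100e9      # each re-creating the full population

    simulated = simulations_run * people_per_simulation
    probability_simulated = simulated / (simulated + real_ancestors)

    print(f"Probability a given observer is simulated: {probability_simulated:.2%}")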

The math and the logic are inexorable once you assume that lots of simulations are being run. But there are a couple of alternative hypotheses, as Dr. Bostrom points out. One is that civilization never attains the technology to run simulations (perhaps because it self-destructs before reaching that stage). The other hypothesis is that posthumans decide not to run the simulations.

“This kind of posthuman might have other ways of having fun, like stimulating their pleasure centers directly,” Dr. Bostrom says. “Maybe they wouldn’t need to do simulations for scientific reasons because they’d have better methodologies for understanding their past. It’s quite possible they would have moral prohibitions against simulating people, although the fact that something is immoral doesn’t mean it won’t happen.”

Dr. Bostrom doesn’t pretend to know which of these hypotheses is more likely, but he thinks none of them can be ruled out. “My gut feeling, and it’s nothing more than that,” he says, “is that there’s a 20 percent chance we’re living in a computer simulation.”

My gut feeling is that the odds are better than 20 percent, maybe better than even. I think it’s highly likely that civilization could endure to produce those supercomputers. And if owners of the computers were anything like the millions of people immersed in virtual worlds like Second Life, SimCity and World of Warcraft, they’d be running simulations just to get a chance to control history — or maybe give themselves virtual roles as Cleopatra or Napoleon.

It’s unsettling to think of the world being run by a futuristic computer geek, although we might at last dispose of that classic theological question: How could God allow so much evil in the world? For the same reason there are plagues and earthquakes and battles in games like World of Warcraft. Peace is boring, Dude.

A more practical question is how to behave in a computer simulation. Your first impulse might be to say nothing matters anymore because nothing’s real. But just because your neural circuits are made of silicon (or whatever posthumans would use in their computers) instead of carbon doesn’t mean your feelings are any less real.

David J. Chalmers, a philosopher at the Australian National University, says Dr. Bostrom’s simulation hypothesis isn’t a cause for skepticism, but simply a different metaphysical explanation of our world. Whatever you’re touching now — a sheet of paper, a keyboard, a coffee mug — is real to you even if it’s created on a computer circuit rather than fashioned out of wood, plastic or clay.

You still have the desire to live as long as you can in this virtual world — and in any simulated afterlife that the designer of this world might bestow on you. Maybe that means following traditional moral principles, if you think the posthuman designer shares those morals and would reward you for being a good person.

Or maybe, as suggested by Robin Hanson, an economist at George Mason University, you should try to be as interesting as possible, on the theory that the designer is more likely to keep you around for the next simulation.

Of course, it’s tough to guess what the designer would be like. He or she might have a body made of flesh or plastic, but the designer might also be a virtual being living inside the computer of a still more advanced form of intelligence. There could be layer upon layer of simulations until you finally reached the architect of the first simulation — the Prime Designer, let’s call him or her (or it).

Then again, maybe the Prime Designer wouldn’t allow any of his or her creations to start simulating their own worlds. Once they got smart enough to do so, they’d presumably realize, by Dr. Bostrom’s logic, that they themselves were probably simulations. Would that ruin the fun for the Prime Designer?

If simulations stop once the simulated inhabitants understand what’s going on, then I really shouldn’t be spreading Dr. Bostrom’s ideas. But if you’re still around to read this, I guess the Prime Designer is reasonably tolerant, or maybe curious to see how we react once we start figuring out the situation.

It’s also possible that there would be logistical problems in creating layer upon layer of simulations. There might not be enough computing power to continue the simulation if billions of inhabitants of a virtual world started creating their own virtual worlds with billions of inhabitants apiece.

If that’s true, it’s bad news for the futurists who think we’ll have a computer this century with the power to simulate all the inhabitants on earth. We’d start our simulation, expecting to observe a new virtual world, but instead our own world might end — not with a bang, not with a whimper, but with a message on the Prime Designer’s computer.

It might be something clunky like “Insufficient Memory to Continue Simulation.” But I like to think it would be simple and familiar: “Game Over.”

Wednesday, March 19, 2008

Form Constant


A form constant is one of several geometric patterns which are recurringly observed during hallucinations and altered states of consciousness.

In 1926, Heinrich Kluver systematically studied the effects of mescaline (peyote) on the subjective experiences of its users. In addition to producing hallucinations characterized by bright, "highly saturated" colors and vivid imagery, Kluver noticed that mescaline produced recurring geometric patterns in different users.

He called these patterns 'form constants' and categorized four types: lattices (including honeycombs, checkerboards, and triangles), cobwebs, tunnels, and spirals.

Many of these shapes have an intriguing similarity to much of the imagery in Ernst Haeckel's Kunstformen der Natur.

Kluver's form constants have appeared in other drug-induced and naturally-occurring hallucinations, suggesting a similar physiological process underlying hallucinations with different triggers. Kluver's form constants also appear in near-death experiences and hallucinations of those with synesthesia.

Other triggers include psychological stress, threshold consciousness (hypnagogia), insulin hypoglycemia, the delirium of fever, epilepsy, psychotic episodes, advanced syphilis, sensory deprivation, photostimulation, electrical stimulation, crystal gazing, migraine headaches, dizziness and a variety of drug-induced intoxications.

These shapes may appear on their own or with eyes shut in the form of phosphenes, especially when exerting pressure against the closed eyelid.

Author Michael Moorcock once observed in print that the shapes he had seen during his migraine headaches resembled exactly the form of fractals. The diversity of conditions that provoke such patterns suggests that form constants reflect some fundamental property of visual perception.

The ancient art of divination may represent a deliberate practice of cultivating form constant imagery and applying the brain's intuitive faculty and/or imagination to derive some meaning from transient visual phenomena.

Many religions represent geometric and/or repetitive forms as indicative of the divine, particularly in a starburst pattern.

Examples include mandalas, yantras (both of these specifically designed to evoke certain mental states), Islamic art and cathedral architecture.

Psychedelic art, inspired at least in part by psychedelic substances, frequently includes repetitive abstract forms and patterns such as tessellation, Moire patterns or patterns similar to those created by paper marbling, and, in later years, fractals. The op art genre of visual art uses bold imagery very like that of form constants.

Exploding head syndrome

Exploding head syndrome is a condition first reported by a British physician in 1988 that causes the sufferer to occasionally experience a tremendously loud noise as if from within his or her own head, usually described as an explosion, roar or a ringing noise.

This usually occurs within an hour or two of falling asleep, but is not the result of a dream and can happen during the day as well.

Although perceived as tremendously loud, the noise is usually not accompanied by pain. Attacks appear to increase and decrease in frequency over time, with several attacks occurring in a space of days or weeks followed by months of remission.

Sufferers often feel a sense of fear and anxiety after an attack, accompanied by elevated heart rate. Attacks are also often accompanied by perceived flashes of light (when perceived on their own, known as a "visual sleep start") or difficulty in breathing.

The condition is also known as "auditory sleep starts." It is not thought to be dangerous, although it is sometimes distressing to experience.

Note that exploding head syndrome does not involve the head actually exploding.

The cause of exploding head syndrome is not known, though some physicians have reported a correlation with stress or extreme fatigue. The condition may develop at any time during life and women are slightly more likely to suffer from it than men. Attacks can be one-time events, or can recur.

The mechanism is also not known, though possibilities have been suggested; one is that it may be the result of a sudden movement of a middle ear component or of the eustachian tube, another is that it may be the result of a form of minor seizure in the temporal lobe where the nerve cells for hearing are located.

Electroencephalograms recorded during actual attacks show unusual activity only in some sufferers, and have ruled out epileptic seizures as a cause.

Hypnic jerk

A hypnic or hypnagogic jerk is an involuntary muscle twitch (commonly known as a myoclonic twitch) which occurs during the transition into hypnagogia.

It is often described as an electric shock or falling sensation, and can cause movement of the body in bed. Hypnic jerks are experienced by most people, especially when exhausted or sleeping uncomfortably.

Hypnic jerks are usually felt once or twice per night. More regular, and usually less intense, hypnic jerks often occur during normal sleep. In extreme cases, this may be classified as a disorder called periodic limb movement. The person with the disorder will usually sleep through the events.

Although the ultimate cause of the hypnic jerk is unknown, a common hypothesis is that the brain misinterprets relaxation as the sleeping primate falling out of a tree.

When a subject is deprived of sleep and is trying to fight sleep, hypnic jerks can occur more often. This normally happens to subjects who have deprived themselves of sleep for longer than 24 hours, or to those who have recently woken up from insufficient amounts of sleep.