Copyright 2004 by World-Mysteries.com
HUMAN ORIGINS: CAN WE HANDLE THE TRUTH?
by Lloyd Pye
Since writing my first essay for NEXUS in mid-2002 [see 9/04], I've been
bombarded by emails (nearing 200) from around the world, many offering
congratulations (always appreciated, of course) and many others requesting more
instruction or deeper insight into areas discussed and/or not discussed.
Let's face it: nearly everyone is interested in Darwinism, Creationism,
Intelligent Design, and the new kid in town, Interventionism. Because of length
constraints, this essay must be in two parts. Here, in Part One, I'll go over
the basics currently known about the origin of life on Earth. Later, in Part
Two, I'll discuss what is known and what can be safely surmised about the origin
of humanity.
We begin by understanding that Charles Darwin stood on a very slippery slope
when trying to explain how something as biologically and biochemically complex
as even the simplest form of life could have spontaneously generated itself from
organic molecules and compounds loose in the early Earth's environment. Because
that part of Darwin's theory has always been glaringly specious, modern
Darwinists get hammered about it from all sides, including from the likes of me,
with a net result that the edifice of "authority" they've hidden behind for 140
years is crumbling under the assault.
Imagine a mediaeval castle being pounded by huge stones flung by primitive,
but cumulatively effective, catapults. Darwinism (and all that term has come to
represent: natural selection, evolution, survival of the fittest, punctuated
equilibrium, etc.) is the castle; Darwinists man the battlements as the lobbed
stones do their work; Intelligent Designers hurl the boulders doing the most
damage; Creationists, by comparison, use slings; and the relatively few (thus
far) people like me, Interventionists, shoot a well-aimed arrow now and then,
though nobody pays much attention to us...yet.
Remember, a well-aimed (or lucky--in either case, the example is instructive)
arrow took down mighty Achilles. Darwinists have heels, too.
LIFE, OR SOMETHING LIKE IT
In Charles Darwin's time, nothing was known about life at the cellular level.
Protoplasm was the smallest unit they understood. Yet Darwin's theory of natural
selection stated that all of life--every living entity known then or to be
discovered in the future--simply had to function from birth to death by "natural
laws" that could be defined and analysed. This would of course include the
origin of life. Darwin suggested life might have gradually assembled itself from
stray parts lying about in some "warm pond" when the planet had cooled enough to
make such an assemblage possible. Later it was realised that nothing would
likely have taken shape (gradually or otherwise) in a static environment, so a
catalytic element was added: lightning.
Throughout history up to the present moment, scientists have been forced to
spend their working lives with the "God" of the Creationists hovering over every
move they make, every mistake, every error in judgment, every personal
peccadillo. When faced with something they can't explain in rational terms,
the only alternative is "God did it", which for them is unacceptable. Thus
they're forced by relentless Creationist pressure to come up with "natural"
answers for absolutely everything, no matter how absurd. That was their
motivation for the theory that a lightning bolt could strike countless random
molecules in a warm pond and somehow transform them into the first living
creature. The "natural" forces of biology, chemistry and electromagnetism could
magically be swirled together--and voilà!...an event suspiciously close to a miracle.
Needless to say, no Darwinist would accept terms like "magic" or "miracle",
which would be tantamount to agreeing with the Creationist argument that "God
did it all". But in their heart-of-hearts, even the most fanatical Darwinists
had to suspect the "warm pond" theory was absurd.
And as more and more was learned about the mind-boggling complexity of
cellular structure and chemistry, there could be no doubt. The trenchant Fred
Hoyle analogy still stands: it was as likely to be true as that a tornado could
sweep through a junkyard and correctly assemble a jetliner.
Unfortunately, the "warm pond" had become a counterbalance to "God did it",
so even when Darwinists knew past doubt that it was wrong, they clung to it,
outwardly proclaimed it and taught it. In many places in the world, including
the USA, it's still taught.
TOO HOT TO HANDLE
The next jarring bump on the Darwinist road to embattlement came when they
learned that in certain places around the globe there existed remnants of what
had to be the very first pieces of the Earth's crust. Those most ancient slabs
of rock are called cratons, and the story of their survival for 4.0 billion
[4,000,000,000] years is a miracle in itself. But what is most miraculous about
them is that they contain fossils of "primitive" bacteria! Yes, bacteria,
preserved in 4.0-billion-year-old cratonic rock. If that's not primitive, what
is? However, it presented Darwinists with an embarrassing conundrum.
If Earth began to coalesce out of the solar system's primordial cloud of dust
and gas around 4.5 billion years ago (which by then was a well-supported
certainty), then at 4.0 billion years ago the proto-planet was still a seething
ball of cooling magma. No warm ponds would appear on Earth for at least a
billion years or more. So how to reconcile reality with the warm-pond fantasy?
There was no way to reconcile it, so it was ignored by all but the
specialists who had to work with it on a daily basis. Every other Darwinist
assumed a position as one of the "see no evil, speak no evil, hear no evil"
monkeys. To say they "withheld" the new, damaging information is not true; to
say it was never emphasised in the popular media for public consumption is true.
That has become the way Darwinists handle any and all challenges to their pet
theories: if they can no longer defend one, they don't talk about it, or they
talk about it as little as possible. If forced to talk about it, they invariably
try to "kill the messenger" by challenging any critic's "credentials". If the
critic lacks academic credentials equal to their own, he or she is dismissed as
little more than a crackpot. If the critic has equal credentials, he or she is
labelled as a "closet Creationist" and dismissed. No career scientist can speak
openly and vociferously against Darwinist dogma without paying a heavy price.
That is why and how people of normally good conscience can be and have been
"kept in line" and kept silent in the face of egregious distortions of truth.
If that system of merciless censure weren't so solidly in place, then surely
the next Darwinist stumble would have made headlines around the world as the
final and absolute end to the ridiculous notion that life could possibly have
assembled itself "naturally". They couldn't even be sure it happened on Earth.
TWO FOR THE PRICE OF ONE
The imposing edifice of Darwinian "origin of life" dogma rested on a piece of
incontrovertible bedrock: there could be only one progenitor for all of life.
When the fortuitous lightning bolt struck the ideally concocted warm pond, it
created only one entity. However, it was no ordinary entity. With it came the
combined abilities to take nourishment from its environment, create energy from
that nourishment, expel waste created by the use of that energy and (almost as
an afterthought) reproduce itself ad infinitum until one of its millions of
subsequent generations sits here at this moment reading these words. Nothing
miraculous about that; simply incalculable good fortune.
This was Darwinist gospel--preached and believed--until the bacteria fossils
were found in the cratons. Their discovery was upsetting, but not a deathblow to
the Darwinist theory. They had to concede (among themselves, of course) that the
first life-form didn't assemble itself in a warm pond, but it came together
somehow because every ancient fossil it spawned was a single-celled bacterium
lacking a cell nucleus (a prokaryote). Prokaryotes preceded the much later
single-celled organisms with a nucleus (eukaryotes), so the post-craton situation
stayed well within the Darwinian framework. No matter how the first life-form
came into existence, it was a single unit lacking a cell nucleus, which was
mandatory because even the simplest nucleus would be much too "irreducibly
complex" (a favourite Intelligent Design phrase) to be created by a lightning
bolt tearing through a warm pond's molecular junkyard. So the Darwinists still
held half a loaf.
In the mid-1980s, however, biologist Carl Woese stunned his colleagues with a
shattering discovery. There wasn't just the predicted (and essential) single
source for all forms of life; there were two: two types of prokaryotic bacteria
as distinct as apples and oranges, dogs and cats, horses and cows...two distinct
forms of life, alive and well on the planet 4.0 billion years ago.
Unmistakable. Irrefutable. Get over it. Deal with it.
But how? How to explain separate forms of life springing into existence in an
environment that would make hell seem like a summer resort? With nothing but
cooling lava as far as an incipient eye might have seen, how could it be
explained in "natural" terms? Indeed, how could it be explained in any terms
other than the totally unacceptable? Life, with all its deepening mystery, had
to have been seeded onto Earth.
PANSPERMIA RAISES ITS UGLY HEAD
Panspermia is the idea that life came to be on Earth from somewhere beyond
the planet and possibly beyond the solar system. Its means of delivery is
separated into two possible avenues: directed and undirected.
Undirected panspermia means that life came here entirely by accident and was
delivered by a comet or meteor. Some scientists favour comets as the prime
vector because they contain ice mixed with dust (comets are often referred to as
"dirty snowballs"), and life is more likely to have originated in water and is
more likely to survive an interstellar journey frozen. Other scientists favour
asteroids as the delivery mechanism because they are more likely to have come
from the body of a planet that would have contained life. A comet, they argue,
is unlikely ever to have been part of a planet, and life could not possibly have
generated itself in or on a frozen comet.
Directed panspermia means life was delivered to Earth by intelligent means of
one kind or another. In one scenario, a capsule could have been sent here the
same way we sent Voyager on an interstellar mission. However, if it was sent
from outside the solar system, we have to wonder how the senders might have
known Earth was here, or how Earth managed to get in the way of something sent
randomly (à la Voyager).
In another scenario, interstellar craft manned by extraterrestrial beings
could have arrived and delivered the two prokaryote types. This requires a level
of openmindedness that most scientists resolutely lack, so they won't accept
either version of directed panspermia as even remotely possible. Instead, they
cling to their "better" explanation of undirected panspermia because it allows
them to continue playing the "origin" game within the first boundaries set out
by Charles Darwin: undirected is "natural"; directed is "less natural".
Notice it can't be said that directed panspermia is "unnatural". According to
Darwinists, no matter where life originated, the process was natural from start
to finish. All they have to concede is that it didn't take place on Earth.
However, acknowledging that forces them to skirt dangerously close to admitting
the reality of extraterrestrial life, and their ongoing "search" for such life
generates millions in research funding each year. This leaves them in no hurry
to make clear to the general public that, yes, beyond Earth there is at the very
least the same primitive bacterial life we have here. There's no doubt about it.
But, as usual, they keep the lid on this reality, not exactly hiding it but
making no effort to educate the public to the notion that we are not, and never
have been, alone. The warm pond still holds water, so why muddy it with facts?
A PATTERN EMERGES
In my book, Everything You Know Is Wrong, I discuss all points mentioned up
to now, which very few people outside academic circles are aware of. Within
those circles, a hard core of "true believers" still seizes on every new
discovery of a chemical or organic compound found in space to try to move the
argument back to Darwin's original starting point that somehow life assembled
itself on Earth "naturally".
However, most objective scholars now accept that the first forms of life had
to have been delivered because: (1) they appear as two groups of multiple
prokaryotes (archaea and true bacteria); (2) they appear whole and complete; (3)
the hellish primordial Earth is unimaginable as an incubator for burgeoning
life; and (4) a half-billion years seems far too brief a time-span to permit a
gradual, step-by-step assembly of the incredible complexity of prokaryotic
biology and biochemistry.
Even more damaging to the hard-core Darwinist position is that the
prokaryotes were--quite propitiously--as durable as life gets. They were
virtually indestructible, able to live in absolutely any environment--and
they've proved it by being here today, looking and behaving the same as when
their ancestors were fossilised 4.0 billion years ago. Scalding heat? We love
it! Choked by saline? Let us at it! Frozen solid? We're there! Crushing
pressure? Perfect for us! Corrosively acidic? Couldn't be better!
Today they are known as extremophiles, and they exist alongside many other
prokaryotic bacteria that thrive in milder conditions. It would appear that
those milder-living prokaryotes could not have survived on primordial Earth, so
how did they come to be? According to Darwinists, they "evolved" from
extremophiles in the same way humans supposedly evolved on a parallel track with
apes--from a "common ancestor".
Darwinists contend such parallel tracks don't need to be traceable. All
that's required is a creature looking reasonably like another to establish what
they consider a legitimate claim of evolutionary connection. Extremophiles
clearly existed: we have their 4.0-billion-year-old fossils. Their descendants
clearly exist today, along with mild-environment prokaryotes that must have
descended from them. However, transitional forms between them cannot be found,
even though such forms are required by the tenets of Darwinism. Faced with that
embarrassing problem, Darwinists simply insist that the missing transitional
species do exist, still hidden somewhere in the fossil record, just as the
"missing link" between apes and humans is out there somewhere and will indeed be
discovered someday. It's simply a matter of being in the right place at the
right time.
As expedient as the "missing link" has been, it's useless for explaining the
next phase of life on Earth, when prokaryotes began sharing the stage with the
much larger and much more complex (but still single-celled) eukaryotes, which
appear around 2.0 billion years ago. The leap from prokaryote to eukaryote is
too vast even to pretend a missing evolutionary link could account for it. A
dozen would be needed just to cover going from no nucleus to one that functions
fully. (This, by the way, is also true of the leap between so-called pre-humans
and humans, which will be discussed in Part Two).
How to explain it? Certainly not plausibly. Fortunately, Darwinists have
never lacked the creativity to invent "warm-pond" scenarios to plug holes in
their theory.
DOING THE DOGMA SHUFFLE
Since it's clear that a "missing link" won't fly over the prokaryote-eukaryote
chasm, why not assume some of the smaller prokaryotes were eaten by some of the
larger ones? Yeah, that might work! But instead of turning into food, energy and
waste, the small ones somehow turn themselves--or get turned into--cell nuclei
for larger ones. Sure, that's a keeper! Since no one can yet prove it didn't
happen (Thank God!), Darwinists are able to proclaim it did. (Keep in mind, when
any critic of Darwinist dogma makes a suggestion that similarly can't be proved,
it's automatically dismissed, because "lack of provability" is a death sentence
outside their fraternity. Inside their fraternity, consensus is adequate because
the collective agreement of so many "experts" should be accepted as gospel.)
To Interventionists like me, the notion of prokaryotes consuming each other
to create eukaryotes is every bit as improbable as the divine fiat of
Creationists. But even if it were a biological possibility (which most evidence
weighs against), it would still seem fair to expect "transition" models
somewhere along the line. Darwinists say "no" because this process could have an
"overnight" aspect to it. One minute there's a large prokaryote alongside a
small one, the next minute there's a small eukaryote with what appears to be a
nucleus inside it. Not magic, not a miracle, just a biological process unknown
today but which could have been possible 2.0 billion years ago. Who's to say,
except an "expert"? In any case, large and small prokaryotes lived side by side
for 2.0 billion years (long enough, one would think, to learn to do so in
harmony), then suddenly a variety of eukaryotes appeared alongside them, whole
and complete, ready to join them as the only game in town for another 1.4
billion years (with no apparent changes in the eukaryotes, either).
At around 600 million years ago, the first multicellular life-forms (the
Ediacaran Fauna) appear--as suddenly and inexplicably as the prokaryotes and
eukaryotes. To this day, the Ediacaran Fauna are not well understood, beyond the
fact they were something like jellyfish or seaweeds in a wide range of sizes and
shapes. (It remains unclear whether they were plants or animals, or a bizarre
combination of both.) They lived alongside the prokaryotes and eukaryotes for
about 50 million years, to about 550 million years ago, give or take a few
million, when the so-called "Cambrian Explosion" occurred.
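The chronology the essay keeps returning to can be laid out in a minimal Python sketch. The dates below are the essay's own round figures (not a consensus geological timescale), used only to make its interval arithmetic explicit:

```python
# Milestones as the essay gives them, in millions of years ago (Ma).
# These are the essay's own round figures, not a consensus chronology.
timeline = [
    ("Earth coalesces", 4500),
    ("Prokaryote fossils (cratons)", 4000),
    ("Eukaryotes appear", 2000),
    ("Ediacaran Fauna appear", 600),
    ("Cambrian Explosion", 550),
]

# Gap between successive milestones, in millions of years.
for (name_a, t_a), (name_b, t_b) in zip(timeline, timeline[1:]):
    print(f"{name_a} -> {name_b}: {t_a - t_b} Myr")
```

Run as written, the gaps come out to 500, 2000, 1400 and 50 million years, matching the intervals the essay cites: the half-billion years before the craton fossils, the 2.0 billion years prokaryotes lived alone, the 1.4 billion years before the Ediacarans, and the roughly 50 million years before the Cambrian Explosion.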
It's rightly called an "explosion", because within a period of only 5 to 10
million years--a mere eye-blink relative to the 3.5 billion years of life
preceding it--the Earth's oceans filled with a dazzling array of seawater plants
and all 26 of the animal phyla (body types) catalogued today, with no new phyla
added since. No species from the Cambrian era looks like anything currently
alive--except trilobites, which seem to have spawned at least horseshoe crabs.
However, despite their "alien" appearance, they all arrived fully
assembled--males and females, predators and prey, large and small, ready to go.
As in each case before, no predecessors can be found.
THE PACE HEATS UP
Volumes have been written about the Cambrian Explosion and the menagerie of
weird plants and animals resulting from it. The Earth was simply inundated with
them, as if they'd rained down from the sky. Darwinists concede it is the
greatest difficulty--among many--they confront when trying to sell the
evolutionary concept of gradualism. There is simply no way to reconcile the
breathtaking suddenness...the astounding variety...the overwhelming incongruity of
the Cambrian Explosion. It is a testament to the old adage that "one ugly fact
can ruin the most beautiful theory". But it's far from the only one.
All of complex life as we understand it begins with the Cambrian Explosion,
in roughly the last 550 million years. During that time, the Earth has endured
five major and several minor catastrophic extinction events. Now, one can
quibble with how an event catastrophic enough to cause widespread extinctions
could be called "minor", but when compared to the major ones the distinction is
apt. The five major extinction events eliminated 50% to 90% of all species of
plants and animals alive when the event occurred.
We all know about the last of those, the Cretaceous event of 65 million years
ago that took out the dinosaurs and much of what else was alive at the time. But
what few of us understand is the distinctive pattern to how life exists between
extinction events and after extinction events. This difference in the pattern of
life creates serious doubts about "gradualism" as a possible explanatory
mechanism for how species proliferate.
Between extinction events, when environments are stable, life doesn't seem to
change at all. The operative term is stasis. Everything stays pretty much the
same. But after extinction events, the opposite occurs: everything changes
profoundly. New life-forms appear all over the place, filling every available
niche in the new environments created by the after-effects of the catastrophe.
Whatever that is, it's not gradualism.
In 1972, the late Stephen Jay Gould of Harvard and Niles Eldredge of the
American Museum of Natural History bit the bullet and announced
that fact to the world. Gradual evolution simply was not borne out by the fossil
record, and that fact had to be dealt with. Darwin's view of change had to be
modified. It wasn't a gradual, haphazard process dictated by random, favourable
mutations in genes. It was something else.
That "something else" they called punctuated equilibrium. The key to it was
their open admission of the great secret that life-forms only changed in spurts
after extinction events, and therefore had nothing to do with natural selection
or survival of the fittest or any of the old Darwinist homilies that everyone
had been brainwashed to believe. It was the first great challenge to Darwinian
orthodoxy, and it was met with furious opposition. The old guard tagged it "punk
eek" and called it "evolution by jerks".
TRUTH AND CONSEQUENCES
What Gould and Eldredge were admitting was the great truth that evolution by
natural selection is not apparent in either the fossil record or in the life we
see around us. The old guard insisted that the fossil record simply had to be
wrong...that it wasn't giving a complete picture because large tracts of it were
missing. That was true, but much larger tracts were available, and those tracts
showed the overwhelming stasis of life-forms in every era, followed by rapid
filling of environmental niches after each extinction event. So while parts of
the record were indeed missing, what was available was unmistakable.
Arguments raged back and forth. Explanations were created to try to counter
every aspect of the punk-eek position. None was ever particularly convincing,
but they began to build up. Remember, scientists have the great advantage of
being considered by one and all as "experts", even when they haven't the
slightest idea of what they're talking about. That allows them to throw shot
after shot against the wall until something sticks, or until the target of their
wrath is covered in so much "mud" that it can't be seen any more. Such was the
fate of the punk-eekers. By the early 1990s, they'd been marginalised.
One can hardly blame the old-guard Darwinists for those attacks. If granted
any credence, the sudden radiations of myriad new species into empty
environmental niches could have gutted many of the most fundamental tenets of
gradual, "natural" evolution. That idea simply could not become established as a
fact. Why? Because the warm pond was drained dry, biochemistry was rendering the
"small-eaten-by-large prokaryotes turned into eukaryotes" story absurd, and the
Cambrian Explosion was flatly inexplicable. If "sudden radiation" were heaped
onto all of that, the entire theory of evolution could founder...and where would
that leave Darwinists? Facing righteous Creationists shouting, "See! God did do
it after all!" Whatever else the Darwinists did, they couldn't allow that to
happen.
Speaking as an Interventionist, I don't blame them. To me, God stands on
equal footing with the lightning bolt. I see a better, far more rational answer
to the mysteries of how life came to be on planet Earth: it was put here by
intelligent beings, and it has been continuously monitored by those same beings.
Whether it's been developed for a purpose or toward a goal of some kind seems
beyond knowing at present, but it can be established with facts and with data
that intervention by outside intelligence presents the most logical and most
believable answer to the question of how life came to be here, as well as of how
and why it has developed in so many unusual ways in the past 550 million years.
So now we come to the crux.
Darwinists go through life waving their PhD credentials like teacher's pets
with a hall pass, because it allows them to shout down and ridicule off the
public stage anyone who chooses to avoid the years of brainwashing they had to
endure to obtain those passes. However, their credentials give them "influence"
and "credibility" with the mainstream media, who don't have the time, the
ability or the resources to make certain that everything every Darwinist says is
true. They must trust all scientists not to have political or moral agendas, and
not to distort the truth to suit those agendas. So, over time, the media have
become lapdogs to the teacher's pets, recording and reporting whatever they're
told to report, while dismissing out of hand whatever they're told to dismiss
out of hand.
Despite Darwinists' rants that those who challenge them do so out of
blithering idiocy, that is not always the case. For that matter, their opponents
are not all Creationists, or even Intelligent Designers, whom Darwinists labour
feverishly to paint into the "goofy" corner where Creationists rightly reside.
So Interventionists like me have few outlets for our ideas, and virtually none
in the mainstream media. Nevertheless, we feel our view of the origin of life
makes the best sense, given the facts as they are now known, and the most basic
aspect of our view starts with what I once called "cosmic dump trucks". However,
that term has been justly criticised as facetious, so now I call them "cosmic
arks".
Imagine this scenario: a fleet of intergalactic "terraformers" (another term
I favour) cruises the universe. Their job is to locate forming solar systems and
seed everything in them with an array of basic, durable life-forms capable of
living in any environment, no matter how scabrous. Then the terraformers return
on a regular basis, doing whatever is needed to maximise the capacity for life
within the developing solar system. Each system is unique, calling for
specialised forms of life at different times during its development, which the
terraformers provide from a wide array of cosmic arks at their disposal.
With that as a given, let's consider what's happened on Earth. Soon after it
began to coalesce out of dust and gas, two forms of virtually indestructible
bacteria appeared on it, as if someone knew precisely what to deliver and when.
Also, it would make sense that every other proto-planet in the solar system
would be seeded at the same time. How could even terraformers know which forming
planets would, after billions of years, become habitable for complex life? And
guess what? A meteorite from Mars seems to contain fossilised evidence of the
same kinds of nanobacteria (extremely small bacteria) found on Earth today. All other
planets, if they're ever examined, will probably reveal similar evidence of a
primordial seeding. It would make no sense for terraformers to do otherwise.
THE RUST ALSO RISES
So, okay, our solar system is noticed by intergalactic terraformers as the
new sun ignites and planets start forming around it. On each of the planets they
sprinkle varieties of the two separate forms of single-celled bacteria they know
will thrive in any environment (the extremophiles). But the bacteria have a
purpose: to produce oxygen as a component of their metabolism. Why? Because life
almost certainly has the same basic components and functions everywhere in the
universe. DNA will be its basis, and "higher" organisms will require oxygen to
fuel their metabolism. Therefore, complex life can't be "inserted" anywhere
until a certain level of oxygen exists in a planet's atmosphere.
Wherever this process is undertaken, the terraformers have a major problem to
deal with: iron. Iron is an abundant element in the universe. It is certainly
abundant in planets (meteorites are often loaded with it). Iron is very reactive
with oxygen: that's what rust is all about. So on none of the new planets
forming in any solar system can higher life-forms develop until enough oxygen
has been pumped into its atmosphere to oxidise most of its free iron. This, not
surprisingly, is exactly what the prokaryotes did during their first 2.0 billion
years on Earth. But it had to be a two-part process.
The proto-Earth would be cooling the whole time, so let's say full cooling
takes roughly 1.0 billion years. So the extremophiles would be the first batch
of prokaryotes inserted because they could survive it. Then, after a billion
years or so, the terraformers return and drop off the rest of the prokaryotes,
the ones that can live in milder conditions. Also, they have to keep returning
on a regular basis because each planet would cool at a different rate, owing to
differences in size and physical composition.
However many "check-up" trips are required, by 2.0 billion years after their
first seeding of the new solar system the terraformers realise the third planet
from the sun is the only one thriving. They are not surprised, having learned
that a "zone of life" exists around all suns, regardless of size or type. Now
that this sun has taken its optimum shape, they could have predicted which
planet or planets would thrive. In this system, the third is doing well but the
fourth one is struggling. It has its prokaryotes and it has water, but its
abundance of iron (hence "the red planet") will take longer to neutralise than
such a small planet with a non-reactive core will take to cool off, so it will
lose its atmosphere to dissipation into space before a balance can be achieved.
The fourth planet will become a wasteland.
The terraformers carry out the next phase of planet-building on the thriving
third by depositing larger, more complex, more biologically reactive eukaryotes
to accelerate the oxidation process. Eukaryotes are far more fragile than
prokaryotes, so they can't be put onto a forming planet until it is sufficiently
cooled to have abundant land and water. But once in place and established,
their larger size (relative to prokaryotes) lets them metabolise much more
oxygen per unit.
Together, the fully proliferated prokaryotes and eukaryotes can spew out enough
oxygen to oxidise every bit of free iron on the Earth's crust and in its seas,
and before long be lacing the atmosphere with it.
Sure enough, when the terraformers return in another 1.4 billion years they
find Earth doing well, but the situation on Mars is unimproved: rust as far as
the eye can see. (Mars is likely to have at least prokaryotic life, because
there wouldn't have been enough oxygen in the surface water it once had--or in
the permafrost it still has--to turn its entire surface into iron oxide.) Earth,
however, is doing fine. Most of its free iron is locked up as rust, and oxygen
levels in the atmosphere are measurably increasing. It's still too soon to think
about depositing highly complex life, but that day is not far off now,
measurable in tens of millions of years rather than in hundreds of millions. For
the moment, Earth is ready for its first load of multicellular life, and so it
is deposited: the Ediacaran Fauna.
Though scientists today have no clear understanding of what the Ediacarans
were or what their purpose may have been (because they don't exist today), it
seems safe to assume they were even more prolific creators of oxygen than the
eukaryotes.
If, indeed, terraformers are behind the development of life on Earth, nothing
else makes sense. If, on the other hand, everything that happened here did so by
nothing but blind chance and coincidence, it was the most amazing string of luck
imaginable. Everything happened exactly when it needed to happen, exactly where
it needed to happen, exactly how it needed to happen.
If that's not an outright miracle, I don't know what is.
MAKING BETTER SENSE
Assuming terraformers were/are responsible for seeding and developing life on
Earth, we can further assume that by 550 million years ago at least the early
oceans were sufficiently oxygenated to support genuinely complex life. That was
delivered en masse during the otherwise inexplicable Cambrian Explosion, after
which followed the whole panoply of "higher" forms of life on Earth as we have
come to know it. (The whys and wherefores of that process are, regrettably,
beyond the scope of this essay, but there are answers that have as much apparent
sense behind them as what has been outlined.)
During those 550 million years, five major and several minor extinction
events occurred, after each of which a few million years would pass while the
Earth stabilised with environments modified in some way by the catastrophes.
Some pre-event life-forms would persist into the new environments, to be joined
by new ark-loads delivered by the terraformers, who would analyse the situation
on the healing planet and deliver species they knew would survive in the new
environments and establish a balance with the life-forms already there (the
Interventionist version of punctuated equilibrium).
We've already seen the difficulties Darwinists have with trying to explain
the flow of life on Earth presented in the fossil record. That record can be
explained by the currently accepted Darwinian paradigm, but the veneer of
"scholarship" overlaying it is little different from the divine fiat of
Creationists. And it can be explained by Intelligent Designers, who claim
anything so bewilderingly complex couldn't possibly have been arrayed without
the guidance of some superior, unifying intelligence (which they stop short of
calling "God", because otherwise they are merely Creationists without cant).
Considering all of the above, we Interventionists believe the terraformer
scenario explains the fossil record of life on Earth with more creativity, more
accuracy and more logic than the others, and in the fullness of time will have a
far greater probability of being proved correct. We don't bother trying to
establish or even discuss who the terraformers are, or how they came to be,
because both are irrelevant and unknowable until they choose to explain it to
us. Besides, speculating about their origin detracts from the far more germane
issue of trying to establish that our explanation of life's origin makes better
sense than any other.
We will continue to be ignored by mainstream media simply because the idea of
intelligent life existing outside Earth is so frightening to the majority of
those bound to it. Among many reasons for fear, the primary one might be our
unfortunate habit of filtering everything beyond our immediate reality through
our own perceptions. Thus, we attribute to others the same traits and
characteristics we possess. Another bad habit appears when we discover new
technology. Invariably our first thought is: "How can we use this to kill more
of our enemies?" Collectively, we all have enemies we want to eliminate to be
done with the problem they present. Like it or not, this is a dominant aspect of
human nature.
Because we so consistently project onto others the darkest facets of our
nature, we automatically assume--despite ET and Alf and other lovable depictions
in our culture--that real aliens will want to harm us. Consequently, we avoid
facing the possibility of their existence in every way we can. (Here I can
mention the obstinate resistance I have personally found to serious
consideration of the Starchild skull, which by all rights should have been
eagerly and thoroughly examined three years ago.)
So Interventionism is ignored because it scrapes too close to UFOs, crop
circles, alien abductions and every other subject that indicates we humans may,
in the end, be infinitesimally insignificant in the grand scheme of life in the
universe. There is much more to say about it, of course, especially as it
relates to human origins, but that has to wait until the second instalment of
this essay.
For now, let the last word be that the last word on origins--of life and of
humans--is a long, long way from being written.
But when it is, I strongly suspect it will be Intervention.
HUMAN ORIGINS: CAN WE HANDLE THE TRUTH?
In Part One of this essay, I explained the Interventionist perspective
regarding the origin of life on Earth. I showed how the great preponderance of
evidence indicates life came here and did not develop here, as we have been
brainwashed to believe by generations of scientists struggling to keep the
creation myths of religion out of classrooms. Personally, I applaud and support
all efforts to keep the most specious aspects of Creationism safely bottled up
in houses of worship, where they belong. However, I have even more disdain for
scientists who allow themselves to be crushed to cowardly pulp by nothing more
debilitating than "peer pressure". Because both groups are so driven by their
collective fears and dogma, neither has a working grip on reality. That becomes
increasingly clear as research continues, which I believe was made evident in
Part One. Now let's try to do the same in Part Two, on human origins.
If anything riles Creationists and Darwinists alike, it's the suggestion they
might be wrong about how we humans have come to dominate our planet so
thoroughly. Both sides can tolerate substantial criticisms regarding the wide
array of subjects under their purviews, including the kind of critique I gave
the origins of life in Part One. However, they have no tolerance for challenges
to their shared hegemony over the beginnings of us all. Dare that and you'll
find yourself in a serious fight. Thus, those of us who support the
Interventionist interpretation come under attack from both sides, not to mention
the other clique at the party, the educated subgroup of Creationists known as
Intelligent Designers (a brilliant choice of name that enforces their
bottom-line concept of a "grand designer", while simultaneously implying they
are smarter than anyone who would oppose them).
All sides seem to agree that humans are "special". Creationists and
Intelligent Designers consider it virtually self-evident that humans originated
by some kind of divine fiat. Creationists believe the instigator is a universal
"godhead" figure, which IDers water down to a more palatable "entity or system"
capable of generating order out of chaos, life out of the inanimate. Even
Darwinists will concede that many of our physical, emotional and intellectual
traits set us far apart from the primate ancestors they believe preceded us in
the biological process of evolution. However, despite our high degree of "specialness",
Darwinists fervently promote the dogma that even the most fanciful distinctions
separating us from our supposed ancestors can be explained entirely by "natural
selection". As with the early life-forms discussed in Part One, there's nothing
natural about it.
THE EARLIEST PRIMATES
Darwinists believe the human saga begins with mouse-sized mammals called
insectivores (similar to modern tree shrews) that scurried around under the feet
of large dinosaurs, trying to avoid becoming food for smaller species. Then
comes the Cretaceous extinction event of 65 million years ago that took out the
dinosaurs and paved the way for those tiny insectivores to evolve over the next
few million years into the earliest primates, the prosimians (literally
pre-simians, pre-monkeys) of the early Palaeocene epoch, which lasts until 55
million years ago.
As with nearly all such aspects of Darwinist dogma, this is pure speculation.
There is, in fact, no clear indication of a transitional insectivore-to-prosimian
species at any point in the process. If any such transitional species had ever
been found, then countless more would be known and I wouldn't be writing this
essay. Darwinian evolution would be proved beyond doubt, and that would be the
end of it.
To read the fossil record literally is to discover the legitimacy of
punctuated equilibrium (discussed in Part One) as a plausible explanation. "Punk
eek", as detractors call it, points out that in the fossil record life-forms do
seem simply to appear on Earth, most often after extinction events but not
always. Both the supposed proto-primates and flowering plants appear during the
period preceding the Cretaceous extinction. They come when they come, so the
relatively sudden post-extinction appearance of the earliest primates, the
prosimians (lemurs, lorises, tarsiers), is one of many sudden manifestations.
In terms of human origins, it raises this question: did proto-primates actually
evolve into prosimians, into monkeys, into apes, into humans? Or did prosimians
appear, monkeys appear, apes appear, and humans appear? Or, in our "special"
case, were we created?
However it happened, there is a pattern. The earliest prosimians are found in
the fossil record after the Mesozoic/Cenozoic boundary at 65 million years ago.
It is assumed their ancestors will someday be found as one of countless "missing
links" needed to make an airtight case for Darwinian evolution. Prosimians
dominate through the Palaeocene and the Eocene, lasting from 65 to 35 million
years ago. (There won't be a test on terms or dates, so don't worry about
memorising them; just try to keep the time-flow in mind.) At 35 million years
ago, the Oligocene epoch begins and the first monkeys come with it.
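For readers who do want to keep the time-flow straight, it can be laid out as a
simple sketch. The dates below are the essay's own rounded figures (sources
quote epoch boundaries differently, as noted later in the text):

```python
# The essay's rounded timeline of primate first appearances, in millions of
# years ago (Mya). Boundary dates are the author's approximations.
epochs = {
    "Palaeocene": (65, 55),  # earliest prosimians appear after 65 Mya
    "Eocene": (55, 35),      # prosimians dominate
    "Oligocene": (35, 25),   # first monkeys appear at ~35 Mya
    "Miocene": (25, 5),      # first apes appear at ~25 Mya
}

for name, (start, end) in epochs.items():
    print(f"{name}: {start}-{end} Mya ({start - end} My long)")
```

Nothing in the argument hangs on the exact boundaries; the point is the order
of appearance and the long stretches of time between each arrival.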
Again, Science assumes that monkeys evolved from prosimians, even though
evidence of that transition is nowhere in sight. In fact, there is strong
evidence pointing in the other direction, toward the dreaded stasis of
punctuated equilibrium. The lemurs, lorises and tarsiers of today are
essentially just as they were 50 million years ago. Some species have gone
extinct while others have modified into new forms, but lemurs and lorises still
have wet noses and tarsiers still have dry ones, which seems always to have been the
case. That's why tarsiers are assumed to be responsible for spinning off monkeys
and all the rest.
Monkeys start appearing at 35 million years ago, looking vastly different
from prosimians. There are certain physiological links, to be sure, such as
grasping hands and feet to permit easy movement through trees. However,
prosimians cling and jump to move around, while monkeys favour
brachiating--swinging along by their arms. Also, prosimians live far more by
their sense of smell than do monkeys. This list goes on.
The reason they're linked in an evolutionary flowchart is that they seem
close enough in enough ways to make the linkage stick. Simple as that. Science
focuses on the similarities and tries hard to ignore their gaping discrepancies,
assuming--as always--that there is plenty of time for evolution to do its magic
and generate those inexplicable differences.
For the next 10 million years the larger, stronger, more "advanced" monkeys
compete with prosimians for arboreal resources, quickly gaining the upper hand
over their "ancestors" and driving several of them to extinction.
Then, at around 25 million years ago, the Miocene epoch brings the first apes
into the fossil record, as suddenly and inexplicably as all other primates
appear. Again, Science insists they evolved from monkeys, but the evidence to
support that claim is as specious as the prosimian-monkey link. The
transitional bones needed to support it are simply not in the fossil record.
If this isn't a distinct pattern of punctuated equilibrium, then what is?
THE PUZZLING MIOCENE
In terms of primate evolution, the Miocene makes little sense. By 25 million
years ago, when it begins, prosimians have been around for about 30 million
years and monkeys for 10 million years. Yet in the Miocene's ample fossil
record, prosimians and monkeys are rare, while the new arrivals, the apes, are
all over the place.
The Miocene epoch stretches from 25 million to 5.0 million years ago. (These
are approximations quoted differently in various sources; I round off to the
easiest numbers to keep track of.) During those 20 million years, the apes
flourish. They produce two dozen different genera (types), and many have more
than one species within the genus. Those apes come in the same range of sizes
they exhibit today, from smallish gibbon-like creatures, to mid-range
chimp-sized ones, to large gorilla-sized ones, to super-sized Gigantopithecus,
known only by many teeth and a few mandibles (jawbones) from India and China.
That's another interesting thing about Miocene apes: their fossils are found
literally everywhere in the Old World--Africa, Europe, Asia. Most of them are
known by the durable teeth and jaws that define Gigantopithecus, while many
others supply enough post-cranial (below the head) bones to grant a reasonably
clear image of them. They present an interesting mix of anatomical features.
Actually, "confusing" is more like it. They are clearly different from monkeys
in that they have no tails, just like modern apes. However, their arms tend to
be more like monkey arms--the same length as their legs. Modern ape arms are
significantly longer than their legs so they can "walk" comfortably on their
front knuckles. More than any other reason, this is why we hear so little from
anthropologists about Miocene apes. Their arms don't make sense as the forelimbs
of an ancestral quadruped. Miocene arms fit better with...something else.
This is not to say, of course, that no ape arms in the Miocene fossil record
are longer than legs. That's nowhere near to being determined because many
species--like Gigantopithecus--have yet to provide their arm bones. However,
since we do have some tailless, ape-like bodies with monkey-like arms and hands,
we have to consider how such a hybrid would move around. Swing through trees by
its arms, like a monkey? Not likely. Monkey arms are designed to carry a
monkey's slight body. An ape's body needs to be brachiated and leveraged by an
ape's much longer, stouter, stronger arms. So how about...walking?
From a physiological standpoint, an ape-like body with monkey-like arms and
hands cannot move easily or comfortably as a quadruped (down on all fours).
It simply doesn't work. In fact, there's really only one posture that lends
itself to the carriage of such a monkey-ape hybrid, and that's upright. Go to a
zoo and watch how much more easily monkeys--tails and all--stand upright than
apes do. Any monkey can move with grace on its hind legs. In comparison, apes are
blundering, top-heavy oafs. Thus, it seems likely that at least some of the
hybrid monkey-apes of the Miocene had to carry themselves upright, in
contrast to the other apes of the era bearing the longer, thicker arms of
gibbons, orang-utans, chimpanzees and gorillas. Remember, we're talking about
two dozen genera and around 50 species.
WALKING THE WALK
Walking is critical to an understanding of human origins because Darwinists
feel it is the factor that set our ancestors on the road to becoming us. The
theory is that around 5.0 to 10 million years ago, when the heavy forests
blanketing Africa began shrinking, some forest-dwelling quadrupedal Miocene apes
still living then (there had been the inevitable extinctions and speciations
during the preceding 15 to 20 million years) began to forage on the newly
forming savannas. Though terribly ill-equipped to undertake such a journey (more
about that later), several ape species supposedly took the risk by learning to
stand upright to see out over the savanna grasses to scout for predators.
Then--after millennia of holding that position for extended periods--they
adopted constant upright posture. In doing so, one of those daring, unknown
species took the real "giant step for mankind".
No one can yet say which of the early upright-walking "pre-humans" went on to
become us, because the physiological gaps between us and them are simply
enormous. In fact, physically, the only significant thing we have in common with
those early ancestors is upright posture. But even that reveals noticeable
differences.
Incredibly, we have the walking trail of at least two early pre-humans at 3.5
million years ago. Found in Laetoli, Tanzania, these tracks were laid down on a
volcanic ash fall that was then covered by another ash fall and sealed until
their discovery by Mary Leakey's team in 1978. Photos of that trail are common
and can be accessed in any basic anthropology textbook or on the Internet. What
is not commonly portrayed, however, is that detailed analysis of the pressure
points along the surface of those prints indicates something that would be
expected: they didn't walk like us. After all, 3.5 million years is a long time,
and from a Darwinist standpoint it's logical to assume extensive evolution would
occur. But whether it was evolution or not, our methods of locomotion are
clearly different.
Humans have a distinctive carriage that starts with a heel strike,
necessitated by our ankles being placed well behind the midpoint of our feet. After
the heel strike, our forward momentum is swung to the left or right, out to the
edges of our feet to avoid our arches (in normal feet, of course). Once past the
arch, there's a sharp swing of the momentum through the ball of the foot from
outside all the way to the inside, where momentum is gathered and regenerated in
the powerful thrust of the big toe, with the four small toes drawing themselves
up to act as balancers. (Watch your own bare feet when you take a step and
you'll see those final "thrust-off" stages in action.)
The pre-humans at Laetoli walked with marked differences. Instead of a
heavy heel-strike leading the way, their ankles were positioned at the centre
balance point of the foot, allowing it to come down virtually flat with an
almost equal distribution of weight and momentum between the heel and the ball
area. Instead of a crazy momentum swing out and around the arch, their arches
were much smaller and the line of momentum travelled nearly straight along the
midline of the entire foot. That made for a much more stable platform for
planting the foot and toeing off into the next step, which was done by
generating thrust with the entire ball area rather than with just the big toe.
When you get right down to it, the Laetoli stride was a superior technique to
the one we use now.
Slow-motion studies of humans walking show that we do virtually everything
"wrong". Our "heel-strike, toe-off" causes a discombobulation that courses up
our entire body. We are forced to lock our knees to handle the torque as our
momentum swings out and around our arches. Because of that suspended moment of
torque absorption, we basically have to fall forward with each step, a fall
that is absorbed by our hip joints. Meanwhile, balance is assisted by swinging our arms.
Because of those factors, we don't walk with anything approaching optimum
efficiency, and the stresses created in us work, over time, to deteriorate our
joints and eventually cripple us. In short, we could use a re-design.
What we actually need to do is to walk more like the pre-humans at Laetoli.
In order to secure that heel-and-toe plant with each step, we'd have to modify
our stride so our knees weren't locked and we weren't throwing ourselves forward
through our hip joints. We'd have to keep our knees in a state of continual
flexion, however slight, absorbing all the stress of walking in our thighs and
buttocks, both of which are designed to accommodate it. This would provide us with a
"gliding" kind of stride that might look unusual (it would resemble the classic
Groucho Marx bent-kneed comedic walk), but would actually be much less
stressful, much less tiring and incredibly more efficient physiologically.
Based on the evidence of the Laetoli tracks, this is exactly how they walked.
WHAT'S WRONG WITH THIS PICTURE?
When Darwinists present reconstructions of so-called "pre-humans", invariably
they look nothing like humans.
Lucy and her Australopithecus relatives were little more than upright-walking
chimpanzees. The robust australopithecines were bipedal gorillas. The genus Homo
(habilis, erectus, Neanderthals and other debatable species) was a distinct
upgrade, but still nowhere near the ballpark of humanity. Only when the
Cro-Magnons appear, as suddenly and inexplicably as everything else, at around
120,000 years ago in the fossil record, do we see beings that are unmistakably
human.
The Laetoli walkers lived 3.5 million years ago. Lucy lived around 3.2
million years ago. Recent discoveries show signs of pushing bipedal locomotion
back as far as 6.0 million years ago. So let's assume for the sake of discussion
that some primates were upright at no less than 4.0 million years ago.
Thus, from approximately 4.0 million years ago all the way to the appearance
of Cro-Magnons some time before 120,000 years ago (95% of the journey), all
pre-human fossils reveal distinctly non-human characteristics. They have thick,
robust bones--much thicker and more robust than ours. Such thick bones are
necessary to support the stress generated by extraordinarily powerful muscles,
far more powerful than ours. Their arms are longer than ours, especially from
shoulder to elbow. Their arms are also roughly the same length as their legs, à
la Miocene apes. And in every aspect that can be quantified--every one!--their
skulls are much more ape-like than human-like. Those differences hold from
australopithecine bones to the bones of Neanderthals--which means that something
quite dramatic happened to produce the Cro-Magnons, and it wasn't the result of
an extinction event. It was...something else.
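The "95% of the journey" figure quoted above can be checked with a quick bit of
arithmetic, using the essay's own round numbers (bipedal fossils from roughly
4.0 million years ago, Cro-Magnons appearing some time before 120,000 years
ago); the share actually works out a little higher, at about 97 per cent:

```python
# Back-of-envelope arithmetic only; both dates are the essay's round figures.
bipedal_start_years_ago = 4_000_000  # assumed earliest upright walkers
cro_magnon_years_ago = 120_000       # approximate Cro-Magnon appearance

pre_cro_magnon_fraction = (
    (bipedal_start_years_ago - cro_magnon_years_ago) / bipedal_start_years_ago
)
print(f"{pre_cro_magnon_fraction:.0%} of the span precedes Cro-Magnon")
```

Either way, the proportion is overwhelming: nearly the entire bipedal record
precedes anything that looks human.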
The chasm between Cro-Magnons (us) and everything else that comes before them
is so incredibly wide and deep that there is no way legitimately to connect the
two, apart from linking their bipedal locomotion. All of the so-called
"pre-humans" are much more like upright-walking chimps or upright-walking
gorillas than they are incipient humans. Darwinists argue that this is why they
are called pre-humans, because they are so clearly not human.
But another interpretation can be put on the fossil record--one that fairly
and impartially judges the facts as they exist, without the "spin" required by
Darwinist dogma. That spin says that the gaping physiological chasm between
Neanderthals and Cro-Magnons can be plausibly explained with yet another
intervention.
LOOKING BACK TO SEE AHEAD
Darwinists use the "missing link" excuse to explain away the fact that
Cro-Magnons appear out of nowhere, looking nothing like anything that has come
before. What they
fail to mention is that dozens of such links would be needed to show any kind of
plausible transition from any pre-human to Cro-Magnons. It clearly didn't
happen--and since they're experts about such things, they know it didn't happen.
However, to acknowledge that would play right into the desperate hands of
Creationists and Intelligent Designers, not to mention give strong support to
Interventionists like me. They are caught between a very big rock and a very
hard place.
Let's accept for the moment that in Darwinian terms there is no way to
account for the sudden appearance of Cro-Magnons (humans) on planet Earth. If
that is true, then what about the so-called "pre-humans"? What are they the
ancestors of? Their bones litter the fossil record looking very unlike humans,
yet they clearly walk upright for at least 4.0 million years, and new finds
threaten to push that back to 6.0 million years. Even more likely is that among
the 50 or more species of Miocene apes, at least a few are walking upright as
far back as 10 to 15 million years ago. If we accept that likelihood, we finally
make sense of the deep past while beginning for the first time to see ourselves
as we really are.
We can be sure that at least four of the 50 Miocene apes were on their way to
becoming modern quadrupeds, because their descendants live among us today.
Equally certain is that others of those 50 walked out of the Miocene on two
legs. Technically these are called hominoids, which are human-like beings that
are clearly not human. In fact, every bipedal fossil preceding Cro-Magnon is
considered a hominoid--a term that sounds distinctly outside the human lineage.
So Darwinists have replaced it in common usage with the much less specific
"pre-human", which not so subtly brainwashes us all into believing there is no
doubt about that connection. And that brainwashing works.
We are further brainwashed to believe there are no bipedal apes alive in the
world today, despite hundreds of sightings and/or encounters with such bipedal
apes every year on every continent except Antarctica. Darwinists brainwash us to
ignore such reports by showering them with ridicule. They call such creatures
"impossible", and hope the weight of their credentials can hold reality at bay
long enough for them to figure out what to do about the public relations
catastrophe they will face when the first hominoid is brought onto the world
stage--dead or alive. That will be the darkest day in Darwinist history, because
their long charade will be officially over. The truth will finally be
undeniable. Bigfoot, the Abominable Snowman and several relatives are absolutely
real.
IF THE SHOE FITS...
I'm not going to waste time and space here going over the mountain of
evidence that is available in support of hominoid reality. I cover it
extensively in the third part of my book, Everything You Know Is Wrong, and
there are many other books that cover one or more aspects of the subject. If you
care to inform yourself about the reality of hominoids, you won't have any
trouble doing so. And the evidence is solid enough to hold up in any court in
the world, except the court of public opinion manipulated by terrified
Darwinists. However, I will go over a few points that bear directly on the
question of human origins.
Let's grant a fairly obvious assumption: that the thousands of ordinary
people who have described hominoid sightings and encounters over the past few
hundred years (yes, they go back that far in the literature) were in fact seeing
living creatures rather than Miocene ghosts. And no matter where on Earth
witnesses come from, no matter how far from the beaten path of education and/or
modern communications, they describe what they see with amazing consistency. To
hear witnesses tell it, the same kinds of creatures exist in every heavily
forested or canopied environment on the planet--which is precisely what we would
expect if they did indeed stride out of the Miocene epoch on two legs.
Furthermore, what witnesses describe is exactly what we would expect of
upright-walking apes. They are invariably described as having a robust, muscular
body covered with hair, atop which sits a head with astonishingly ape-like
features. In short, the living hominoids are described as having bodies we would
expect to find wrapped around the bones found in the so-called "pre-human"
fossil record. In addition, witnesses describe what they see as having longer
arms than human arms, hanging down near their knees, which means those arms are
approximately the length of their legs. Witnesses also contend that the
creatures walk with a "gliding" kind of bent-kneed stride that leaves tracks
eerily reminiscent of the tracks left at Laetoli 3.5 million years ago.
Now we come to the crux for Darwinists, Creationists and Intelligent
Designers. Evidence supporting the reality of hominoids is overwhelming. Truly.
And if they are real, it means the "pre-human" fossil record is actually a
record of their ancestors, not ours. And if that's the case, then humans have no
place on the flowchart of life on Earth. And if that's true, then it's equally
clear that humans did not evolve and could not have evolved here the way
Darwinists claim. And if we didn't evolve here, that opens the door to the
Interventionist position that nothing evolved here: everything was brought or
created by sentient off-world beings whom I call terraformers, whose means and
motivation will remain unknown to us unless and until they see fit to explain
themselves. I hope no one is holding their breath.
The point is that the Miocene epoch had the means to produce living
hominoids--50 or more different species (which almost certainly will be shaved
down to perhaps a dozen as more complete bodies are found) as far back as 20
million years ago. It produced some with monkey-like arms better suited to an
upright walker than a brachiating tree-dweller or knuckle walker.
By the time it ended, 5.0 million years ago, a half-dozen or more bipedal
apes were on the Earth, which we know from the ape-like australopithecine and
early Homo fossils. And we know from Laetoli that they had a walking pattern
distinct from humans, which modern witnesses describe as still being the way
hominoids walk. In short, they've followed the punctuated equilibrium pattern of
all other life on Earth.
SO WHAT ABOUT HUMANS?
Humans simply do not fit the pattern of primate development on Earth. Notice
the word development instead of evolution. Species that appear here do undergo
changes in morphology over time. It's called microevolution, because it
describes changes in body parts. Darwinists use the undeniable reality of
microevolution to extrapolate the reality of macroevolution, which is change at
the species-into-more-advanced-species level. That is blatantly not evident in
the fossil record, especially when it comes to human physiology.
We have shown, I hope, that humans have been shoehorned by Darwinists into
having a place in the fossil record that doesn't belong to them but to living
hominoids (Bigfoot, etc.). Furthermore, humans have been shoehorned into being
primates, when there is little about them--certainly nothing of
significance--that fits the classic primate pattern. In fact, if it weren't for
the desperate need of Darwinists to keep humans closely linked to the primate
line, we would have had our own appellation long ago--and we'll surely have it
once the truth is out from the Pandora's box of Darwinist deception.
Relatively speaking, primate bones are much thicker and heavier than human
bones. Primate muscles are five to 10 times stronger than ours. (Anyone who's
dealt with monkeys knows how amazingly strong they are for their size.) Primate
skin is covered with long, thick, visible hair; ours is so fine as to be largely
invisible. Primate hair is thick on the back, thin on the front; ours is the
other way around. Primates have large, round eyes capable of seeing at night. Compared
to theirs, we have greatly reduced night vision. Primates have small, relatively
"simple" brains compared to ours. They lack the ability to modulate sound into
speech. Primate sexuality is based on an oestrus cycle in females (though some,
like bonobo chimps, have plenty of sex when not in oestrus). In human females,
the effects of oestrus are greatly diminished.
This list could go on to cite many more areas of difference, but all of them
are overshadowed by the Big Kahuna of primate/human difference: all primates
have 48 chromosomes, while humans have "only" 46 chromosomes. Two entire
chromosomes represent a heck of a lot of DNA removed from the human genome, yet
somehow that removal made us "superior" in countless ways. It doesn't make
sense. Nor does the fact that even with two whole chromosomes missing from our
genome, we share what is now believed to be 95% of the chimp genome and around
90% of the gorilla genome. How can those numbers be reconciled? They can't.
Something is wrong here. Someone has been cooking the genetic books.
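To give a rough sense of how much DNA two chromosomes represent, here is a
small sketch. The ~3.1-billion-base-pair genome size is an assumption added
for illustration, not a figure from this essay, and it treats all chromosomes
as equal-sized, which real chromosomes are not:

```python
# Illustration only: assumes a ~3.1 billion base-pair genome and equal-sized
# chromosomes; real human chromosomes vary widely in length.
genome_base_pairs = 3_100_000_000  # assumed approximate genome size
chromosome_count = 46

avg_chromosome_bp = genome_base_pairs / chromosome_count
two_chromosomes_bp = 2 * avg_chromosome_bp
share_of_genome = two_chromosomes_bp / genome_base_pairs

print(f"Two average chromosomes: ~{two_chromosomes_bp / 1e6:.0f} million "
      f"base pairs, ~{share_of_genome:.1%} of the genome")
```

On those assumed numbers, two chromosomes come to well over a hundred million
base pairs, which is the scale of difference the passage above is pointing at.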
THE STUFF OF LIFE
In the wild, plants and animals tend to breed remarkably true to their
species. That's why stasis is the dominant characteristic of life on Earth.
Species appear and stay essentially the same (apart from the superficial changes
of microevolution) until they go extinct for whatever reason (catastrophe,
inability to compete for resources effectively, etc.). When "faulty" examples
appear, they're nearly always unable to put the fault into their species'
collective gene pool. A negative mutation that doesn't kill the individual it
appears in is unlikely to be passed along to posterity, despite Darwinist
assertions that this is precisely how evolution occurs. All genomes have
hard-wired checks and balances against significant changes of any kind, which is
why stasis has been the hallmark of all life since it began here. Aberrant
examples are efficiently weeded out, either early in the reproductive process or
soon after reproduction (birth). Faulty copies are deleted.
This deletion of faults holds true in the vast majority of species. Most
genomes are--and stay--remarkably clear of gene-based defects. All species are
susceptible to mistakes in the reproductive process, such as sperm/egg
misconnections. In mammals, this produces spontaneous abortions, stillbirths or
live-birth defects. However, there are precious few defects that swim in the
gene pools of any "wild" or "natural" species. The only places we find
significant, species-wide genetic defects are in domesticated plants and
animals, and in those they can be--and often are--numerous.
Domesticated plants and animals clearly seem to have been genetically created
by "outside intervention" at some point in the distant past. (For those
interested in learning more about this, I discuss it in considerable detail in
NEXUS 9/04.) Domesticated species have so many points of divergence from
wild/natural species, it's not realistic to consider them in any kind of
relative context. As we've seen above, the same holds true for humans and the
primates we supposedly evolved from. They're apples and oranges.
We humans have over 4,000 genetic defects spread throughout our common gene
pool. Think about that. No other species comes close. And yet, our mitochondrial
DNA proves we have existed as a species for "only" about 200,000 years. Remember
the first Cro-Magnon fossils showing up in strata 120,000 years old? That fits
well with the origin of a small proto-group at around 200,000 years ago. (There
will almost certainly be Cro-Magnon fossils found prior to 120,000 years ago,
but it is unlikely they were dispersed widely enough to have left fossils near
the 200,000-year mark. Naturally, the very first one could have been fossilised,
but that's not the way to bet. Fossilisation is quite rare.)
All that being the case, how did over 4,000 genetic defects work their way
into the human gene pool, when such genome-wide defects are rare to nonexistent
in wild or natural species? (Remember, Darwin himself noticed that humans are
very much like domesticated animals in many of our physical and biological
traits.) It can only have occurred if the very first members (no more than a
handful of breeding pairs) had the entire package of faults within their genome.
That's the only way Eskimos and Watusis and all the rest of humanity can express
the exact same genetic disorders.
If we descended from apes, as Darwinists insist, then apes should have a very
large number of our genetic defects. They do not. If, on the other hand, we've
been genetically unique for only 200,000 years, then the only way those defects
could be with us is if they were put into our gene pool by the genetic
manipulation of the founding generation of our species, and the mistakes made in
that process were left in place to be handed down to posterity. And, as might be
expected, this is also how domesticated plants and animals came to have their
own inordinate numbers of genetic defects. It simply couldn't happen any other way.
THE FINAL NAIL…
When Einstein was asked in reference to relativity, "How did you do it?", he
replied, "I ignored an axiom." This is what everyone must do if we are to get
anywhere near the truth about human origins.
Darwinists ask us to believe a theory based on this axiom: "There are good
grounds to believe our early ancestors lived in forests. There are equally good
grounds to believe our later ancestors lived by hunting game on African
savannas. Therefore, we can assume that somehow, some way, we went from living
in forests to living on the savannas." The trick, for Darwinists, is in
explaining it plausibly.
Savanna theorists ask us to believe that, 5 to 10 million years ago,
several groups of forest-dwelling Miocene apes were squeezed by environmental
pressures to venture out onto the encroaching savannas to begin making their
collective living. This means they had to rise from the assumed quadrupedal
posture attributed to all Miocene apes to walk and run on two legs, thus giving
up the ease and rapidity of moving on all fours. Those early groups had to make
their way with unmodified pelvises, inappropriate single-arched spines, absurdly
under-muscled thighs and buttocks, and heads stuck on at the wrong angle, and
all the while doggedly shuffling along on the sides of long-toed, ill-adapted
feet, thereby becoming plodding skin-bags of snack-treats for savanna predators.
If any harebrained scheme ever deserved a re-think by its originator(s), this
would be the one.
Of course, the real re-think needs to be done by Darwinists, because it is
glaringly obvious that no forest-bound species of ape could have ventured onto
the savanna as a stumbling, bumbling walker and learned to do it better out
there among the big cats. If a collective group had been unfit for erect
movement on the savanna, they wouldn't have gone. If they did go, they couldn't
and wouldn't stay. Even primates are smarter than that. And understand, there
are primates that did make the move onto the savanna, albeit always remaining
within range of a high-speed scurry into nearby trees. Baboons are the most
successful of this small group, all of which have retained quadrupedal locomotion.
In addition to the forest-to-savanna transition, Darwinists face numerous
other improbable--if not impossible--differences between humans and terrestrial
primates. Beyond bipedalism and the genetic discrepancies already
addressed, there are major differences in skin and the adipose tissue (fat)
beneath it; in sweat glands, in blood, in tears, in sex organs, in brain size
and function, and on and on and on. This is a very long list that can be
examined in much fuller detail in the work of a brilliant, determined researcher
into human origins named Elaine Morgan.
Ms Morgan is the chief proponent of what challenged Darwinists derisively
call "the Aquatic Ape theory", as if the juxtaposition of those disparate words
were enough to dismiss it as an absurd notion. Nothing could be further from the
truth. In books like The Scars of Evolution (Souvenir Press, London, 1990), she
makes a devastating case against the notion that humans evolved from
forest-dwelling apes that moved out onto the savannas. She believes humans must
have gone through an extended period of development in and around water to
generate the bizarre array of physiological oddities we exhibit relative to the
primates we supposedly evolved from.
However, despite all her wonderfully creative work, Ms Morgan remains wedded
to the Darwinist concept of evolution, which had to play itself out in only the
200,000 years dictated by our mitochondrial DNA.
MAKING SENSE OF THE INSENSIBLE
The pieces of the puzzle are on the table. The answer is there for anyone to
see. But rearranging those pieces properly is no easy task, and it is even more
difficult to get dogmatists of any stripe to look at the picture in a light
different from their own. That has been my purpose in writing these two essays
on origins--of life and of humans. They are two of the world's most sensitive
areas of scholarship and debate, producing some of the most vitriolic exchanges
in all of academia. But vitriol, like might, doesn't make right.
I once knew a baseball player who'd pitched a no-hitter against a seriously
inferior team. Upon being criticised for the obvious imbalance between his
abilities and those of his opponents, the pitcher shrugged and said, "A
no-hitter is a no-hitter, even against Lighthouse for the Blind." And so it is
with a mistaken belief. If millions believe a thing, that doesn't make it true.
I believe that the facts, if fairly evaluated, will over time prove that
humans--and indeed, life itself--did not originate on Earth, and that nothing
has macroevolved on Earth. It has all been brought here and left to fend for
itself, then replaced when events required the introduction of new forms. No
other theory suits the facts nearly as well.
As for humans (the object of this essay), look back to the Miocene epoch,
where the earliest traces of our ancestors supposedly originate. Apes dominate.
Look at the fossils--the so-called "pre-humans"--from the Pliocene epoch,
starting 5 million years ago. Other than bipedal walking, all of their
physical aspects shout out "ape roots". Look at today's tracks, sightings and
encounters with living hominoids, Bigfoot and others. These all-too-real
creatures will one day be proved to have a direct link back to the
Miocene--which, at a stroke, will eliminate any possibility that humans and apes
share any kind of common ancestor.
We humans are not indigenous to planet Earth. We were either put here intact
or we developed here, but we did not evolve here. Our genes make clear that
we've been cut-and-pasted from other, non-primate, non-Earthly species.
Personally, I believe that the work of Zecharia Sitchin (The Earth
Chronicles) comes closest to a plausible explanation. But even if some aspects
of what he says are wrong, or even if all of it someday is proved to be wrong,
that won't change the basic facts that his work--and my own work--address.
Humans are not primates. We do indeed stand apart as a "special" creation,
long espoused by theologians and now by certain credentialled scientists. The
only question left hanging is, of course: who or what was the creator? I don't
think I'll be privileged to learn that in my lifetime. But I'm confident I'm
within reach of the next best answer.
I'm confident that we were created by invasive genetic manipulation.
DARWINISM vs. CREATIONISM
A Checkered History, A Doubtful Future
by Lloyd Pye
Starting with the Sumerians, the first great culture 6,000 years ago, through
the Egyptians, Greeks, and Romans, everyone accepted that some form of heavenly
beings had created all of life and, as a crowning achievement, topped it off with
humans. Now, consider that for a moment. Today the CEO of a medium-sized
corporation can verbally issue an instruction to be carried out company-wide and
have no hope it will reach the lower echelons intact. So the fact that most
historical cultures, from first to most recent (our own), believed essentially
the same creation story is astonishing in its consistency.
Naturally, such long-term consistency made it extremely difficult to
challenge when the accumulation of scientific evidence could no longer be
ignored. Charles Darwin is usually credited with issuing the first call for a
rational examination of divine creation as the belief system regarding the
origins of life and humanity. However, in his 1859 classic, The Origin Of
Species, he skirted both issues in an attempt to placate his era’s dominant
power structure — organized religion. Though he used the word “origin” in the
title, he was careful to discuss only how species developed from each other, not
how life originated. And he simply avoided discussing humanity’s origins.
Ultimately, pressure from both supporters and critics forced him to tackle
that thorny issue in 1871’s The Descent Of Man; but Charles Darwin was never
comfortable at the cutting edge of the social debate he helped engineer.
The true roots of the challenge to divine creation extend 65 years prior to
Darwin, back to 1795, when two men — a naturalist and a geologist — published
stunning works. The naturalist was Erasmus Darwin, Charles Darwin’s grandfather,
a brilliant intellectual in his own right. In The Laws Of Organic Life he
suggested that population numbers drove competition for resources, that such
competition was a possible agent of physical change, that humans were closely
related to monkeys and apes, and that sexual selection could have an effect on
species modification. In short, he dealt with nearly all of the important topics
his grandson would later expand upon, except natural selection.
The geologist was a Scotsman, James Hutton, whose Theory Of The Earth
suggested for the first time that Earth might be much older than 6,000 years,
then the universally accepted time frame established a century earlier by
Anglican Bishop James Ussher. (Many if not most of today’s mainstream Christians
are convinced that the creation date of 6,000 years ago is Holy Writ, even
though mortal Bishop Ussher arrived at it by the mundane method of calculating
the “who begat whoms” listed in the Bible.)
Hutton studied the layering of soils in geological strata and concluded that
rain washed soil off the continents and into the seas; at the bottom of the seas
heat from inside the planet turned soil into rock; over great stretches of time
the new rocks were elevated to continent level and slowly pushed up to form
mountains; then in turn those mountains were weathered away to form new layers
of soil. This unending cycle meant two things: Earth was not a static body
changed only superficially at the surface by volcanoes and earthquakes; and each
layering cycle required vast amounts of time to complete.
The significance of Hutton’s insight, to which he gave the jawbreaker name of uniformitarianism, cannot be overstated. However, he couldn’t challenge Ussher’s
6,000 year dogma because he provided no alternative to it. He was certain that
6,000 years was much too short a time span for any weathering cycle to be
completed, but in the late 18th century there was no way to accurately measure
geological eras. That would have to wait another thirty-five years until Sir
Charles Lyell, a far more methodical British analyst and researcher, could
firmly establish uniformitarianism as the basis of modern geology.
Lyell took Hutton’s work and ran with it, creating a three-volume series
called Principles Of Geology (1830-1833) that convincingly provided the time
lines and time frames Hutton lacked. Bishop Ussher’s 6,000 year dogma still held
complete sway with ecclesiastics everywhere, but the world’s burgeoning ranks of
scientists could see that Hutton and now Lyell were correct; the earth had to be
millions of years old rather than 6,000. But how to convince the still largely
uneducated masses of Ussher’s fallacy? Like Hutton before him, Lyell and his
supporters could not break through the dense wall of ignorance being perpetuated
by religious dogma. However, they had knocked several gaping cracks in it, so
when Charles Darwin came along in another thirty years (1859), the wall was
ready to begin crumbling with an echo that reverberates to this day.
Darwin was strongly influenced by Lyell, who published the first of his
geology tomes while Darwin was at Cambridge completing his last year of
theological training (he only studied nature as an avocation). He took the first
volume of the trilogy on his fateful voyage aboard the H.M.S. Beagle and
devoured it along the way. Masterfully written and persuasively argued, it made
such an impression on the 22-year-old that in later life he said, “I really
think my books come half out of Lyell’s brain. I see through his eyes.” So
between Lyell’s genius and his grandfather Erasmus’ unconventional views about
nature instilled during his childhood, young Charles set sail toward his destiny
with a blueprint of his revolutionary theory in mind and a tool to build it in hand.
Without saying it outright, Darwin’s bottom line was that life’s myriad forms
managed their own existence from start to finish without divine help. This did
not take God entirely out of the equation, but it did remove His influence on a
day-to-day basis. The irony is that Charles Darwin did his work reluctantly,
being a devout man who had trained to become a minister. Nonetheless, the schism
he created between evolution (a term he never used; his choice was natural
selection) and God was the battering ram that breached the forbidding wall of
dogmatic ignorance that had stood for thousands of years.
Though breached, that wall did not come down entirely. Instead, an
ideological war erupted on both sides of what remained of it, pitting Darwinists
against Creationists in intellectual bloodletting that eventually forced some of
the wounded to seek relief in compromise. Both sides might be content, they
suggested, if God could be acknowledged as the initiator of all life, followed
by a “hands-off” policy thereafter to let nature take its evolutionary course.
All well and good. But instead, both sides adopted a winner-take-all strategy,
unwilling to make even marginal concessions to the other side’s point of view.
Allowing no room for compromise left both sides open to continuous attack,
and the salvos they exchanged were fierce and relentless. James Hutton and
Charles Lyell had proven beyond reasonable doubt that the earth was immensely
older than 6,000 years, yet they and their supporters had been overwhelmed by
the oppressive power of ecclesiastic influence. Now, however, Darwin’s arguments
supporting gradual changes over equally vast amounts of time tipped the scales
in favor of science. Public opinion began to shift. Uniform rejection of the new
ideas became tentative acceptance at an ever-increasing rate.
This alarming turn of events forced all but the most ardent Creationists to
seek ways to appease their critics, to put themselves back in the driver’s seat
of public opinion. Bishop Ussher’s unyielding time line of 6,000 years was
gradually coming to symbolize their willful disdain of reality, like a chain
draped around their necks, drowning them as the tide of understanding shifted
the sand beneath their feet. They began to modify their insistence that God had
created everything in the universe exactly as recounted in the Bible. They could
suddenly see the wisdom of granting Him the latitude to accomplish His miracles
in six eras of unspecified length rather than in six literal days.
Of course, Creationists did more than hit the reverse pedal on their
sputtering juggernaut. The brightest of them dug deep into Darwin’s emerging
theory to discover holes nearly equal to the ones scientists were exposing in
religious dogma. In 1873, only fourteen years after The Origin Of Species,
geologist J.W. Dawson, chancellor of McGill University in Montreal, published
The Story Of The Earth And Man, which was every bit as well written and as
carefully argued as Darwin’s masterpiece. In it Dawson pointed out that Darwin
and his followers were promoting a theory based on three fallacious “gaps” in
reasoning that could not be reconciled with the knowledge of their era. What is
so telling about Dawson’s three fallacies is that they remain unchanged to this day.
The first fallacy is that life can spontaneously animate from organic
material. In 1873 Dawson complained that “the men who evolve all things from
physical forces do not yet know how these forces can produce the phenomenon of
life even in its humblest forms.” He added that “in every case heretofore, the
effort (to create animate life) has proved vain.” After 127 years of heavily
subsidized effort by scientists all over the world to create even the most basic
rudiments of life, they are still batting an embarrassing zero. In any other
scientific endeavor, reason would dictate it is time to call in the dogs and
water down the fire. But when it comes to Darwinian logic, as Dawson noted in
1873, “here also we are required to admit as a general principle what is
contrary to experience.”
Dawson’s second fallacy was the gap that separates vegetable and animal life.
“These are necessarily the converse of each other, the one deoxidizes and
accumulates, the other oxidizes and expends. Only in reproduction or decay does
the plant simulate the action of the animal, and the animal never in its
simplest forms assumes the functions of the plant. This gap can, I believe, be
filled up only by an appeal to our ignorance.” And thus it remains today. If
life did evolve as Darwinists claim, it would have had to bridge the gap between
plant and animal life at least once, and more likely innumerable times. Lacking
one undeniable example of this bridging, science is again batting zero.
The third gap in the knowledge of 1873 was “that between any species of
animal or plant and any other species. It is this gap, and this only, which
Darwin undertook to fill up by his great work on the origin of species; but,
notwithstanding the immense amount of material thus expended, it yawns as wide
as ever, since it must be admitted that no case has been ascertained in which
individuals of one species have transgressed the limits between it and other
species.” Here, too, despite a ceaseless din of scientific protests to the
contrary, there remains not a single unquestioned example of one species
evolving entirely—not just partially—into another distinct and separate species.
To be fair, some of today’s best-known geneticists and naturalists have
broken ranks and acknowledged that what Dawson complained about in 1873 remains
true today. Thomas H. Morgan, who won a Nobel Prize for work on heredity, wrote
that “Within the period of human history, we do not know of a single instance of
the transformation of one species into another if we apply the most rigid and
extreme tests used to distinguish wild species.” Colin Patterson, director of
the British Museum of Natural History, has stated that “No one has ever produced
a species by mechanisms of natural selection. No one has gotten near it.” And
these are by no means extraordinary disclosures. Every scientist in related
fields is well aware of it, but shamefully few have the nerve to address it.
By the time Darwin died, in 1882, one of his most zealous supporters, German
zoologist Ernst Haeckel, had produced a series of drawings that showed the
developing embryos of various mammals (rabbit, pig, chimp, man) were virtually
identical until well into their gestation. This had been a great comfort to
Darwin in his old age, but by 1915 it was clear that Haeckel had forged the
drawings. Nonetheless, they served Darwinists so well that Haeckel’s forgery
conviction at the University of Jena, where he taught, was conveniently
overlooked, and his drawings can still be found in modern texts supporting
evolution. In fact, any reader of this article who was taught evolution in
school will very likely have seen Haeckel’s drawings in textbooks and been
assured they were legitimate.
A more widely known fraudulent attempt to support Darwin’s flagging theory
was England’s famous Piltdown Man hoax of 1912, which was an ancient human skull
found in conjunction with a modern orangutan’s lower jaw that had been doctored
(its teeth filed down to look more human) and aged to match the look of the
skull. This was much more important than Haeckel’s fraud because it provided the
desperately sought “missing link” between humans and their proposed ape-like ancestors.
Nearly all of England’s evolutionary top guns swung in behind the fraud, and
their colleagues worldwide joined them with such zeal that it took 40 years to
expose it for what it was. However, the damage it caused to the search for truth
had already been done. The world became so convinced that Darwinian evolution
was true and correct, it was just a matter of time before Creationists would
draw a line in the dirt and call for a last great battle to decide the issue
once and for all. That battle did come, to an obscure American hamlet called
Dayton, Tennessee, 75 years ago (July, 1925).
The “Monkey Trial,” as H.L. Mencken dubbed it, revolved around John Scopes, a
24-year-old gym teacher and football coach who once substituted for the regular
biology teacher in Dayton’s high school. The American Civil Liberties Union
(ACLU) chose him as its point man because he vocally disagreed with a new
Tennessee law that banned the teaching of evolution instead of, or alongside,
the Biblical account of creation. He also was unmarried, incurring no risk to a
family by allowing himself to be prosecuted.
Though now one of many so-called “trials of the century,” this one drew 200
reporters from 2,000 newspapers across the country and the world. It has since
generated hundreds of books, plays, television movies, and feature films. In
October, 1999, George magazine chose it as the fourth most important event of the
20th century. Yet historian Garry Wills has astutely called it “a nontrial over
a nonlaw with a nondefendant backed by nonsupporters. Its most profound moment
involved nontestimony by a nonexpert, followed by a nondefeat.” Without question
it can stand alongside the O.J. Simpson debacle as a world-class black eye for
the American legal system.
All during the trial Clarence Darrow, a staunch Darwinist and Scopes’ lawyer,
tangled with William Jennings Bryan, an equally staunch Creationist who
represented the State of Tennessee. Both were outstanding advocates and renowned
orators, and each was certain he could outtalk the other and convince the world
of the rightness of his vision of creation. However, Darrow’s rapier wit
shredded Bryan’s assertions that the Bible was a literal record of God’s
sacrosanct word. Bryan won from a legal standpoint because the issue in question
was whether Scopes had defied his state’s law, which he admitted all along in
order to get the trial arranged in the first place. Scopes was convicted and
fined $100, which was later overturned on a technicality, so in the end he was exonerated.
More than anything else, the Monkey Trial was staged to settle the
Darwinism-Creationism debate once and for all by pitting the most eloquent
defender of each in a mouth-to-mouth duel on a world stage that no one could
ignore. And when the dust had settled it was clear the rolling tide of history
would not be turned. The mounting support for Darwinism crested in a tsunami of
doubt—and even ridicule—that crashed down on Creationists everywhere, sweeping
them from the dominant positions they had enjoyed for centuries, into the social
and political backwaters they endured for decades.
Though clearly knocked down by the Darrow/Scopes haymaker, the Creationists
were far from out. They lowered their profile and became relatively inactive
through the Depression and the years of World War II, waiting until society
stabilized in the 1950’s. Then they rallied their troops and resumed attacking
educational systems, where young minds were being indoctrinated with Darwinist
dogma. And this time they did it right. Instead of wasting effort and money
lobbying state legislatures, they moved out into the heartland and focused on
local school boards, insisting belief in evolution was costing America its faith
in God and religion, and destroying morality and traditional family life.
When the social eruptions of the 1960’s appeared, Creationists were quick to
say “We told you so!” They blamed the teaching of “Godless evolution” as a
primary cause, demanding that religion be put back in schools as a quick way to
return to “the good old days.” At the same time, they hit upon their most
brilliant tactic yet: formally changing their basic tenet from “Biblical
Creationism” to “Creation Science.” Then, in an equally brilliant stroke, they
shifted from lobbying school boards to getting themselves elected to them.
Predictably, they enjoyed great success in the Bible Belt girdling the Deep South.
Apart from making most real scientists gag every time they hear it, “Creation
Science” provided Creationists with the cachet of authority they had been
seeking—and needing—since Darwin so thoroughly sandbagged them. And, it has been
remarkably effective in shifting public opinion away from the scientific
position. Gallup Polls taken in 1982, 1993, 1997, and 1999 show the percentage
of Americans who believed “God created human beings in their present form at one
time within the past 10,000 years” was 44%, 47%, 44%, and 47% respectively. In a
recent Fox News/Opinion Dynamics poll asking people what they thought about
human origins, 15% said they accepted Darwinian evolution, 50% believed the
Biblical account, and 26% felt there was truth on both sides. The most
perceptive group might well have been the 9% who said they were not sure.
One could argue that those numbers are more of a comment on America’s failing
educational system than on the effectiveness of Creationist strategies. But in
any case, the Creationist cacophony reached a fever pitch in August of last
year, when the Kansas State Board of Education voted by a 6 to 4 margin to
eliminate from the state’s high school curricula the teaching of not only
biological evolution, which received virtually all media focus, but also of
geology’s “Old Earth” theories, and of cosmology’s “Big Bang” of universal
creation. The Kansas School Board went after science across the board.
That vote has been by far the high point of the modern Creationist offensive,
but courts are still loath to accept any comparison between so-called “Creation”
science and what is considered “real” science. In 1981 Arkansas and Louisiana
passed laws requiring that Creationism be taught in public schools. In 1982 a
U.S. District Court declared the Arkansas law unconstitutional. In 1987 the
Louisiana case made its way to the Supreme Court, which ruled Creationism was
essentially a religious explanation of life’s origins and therefore favored one
religion (Christianity) over others (Islam, Hinduism, etc.).
As usual, after the 1987 defeat the Creationists went back to the drawing
board and devised yet another shrewd strategy, which has carried them through
the 1990’s and into this new millennium. They have transformed “Creation
Science” into theories they call “Sudden Appearance” outside the Bible Belt, or
“Intelligent Design” within it. Both versions carefully avoid referring to God
by name or to specific aspects of religion, but they strongly focus on the
Achilles heel of Darwinism, which is that all species thus far discovered in the
fossil record appear suddenly, whole and complete, males and females, leaving no
plausible way they could have evolved by Darwinian gradualism.
Fortunately for Darwinists, the legal protection provided by the Supreme
Court currently trumps the Achilles heel their rivals keep pointing out. But
that tide is running and running strong. Eventually it will turn on them the way
the tide of ignorance turned on Creationists when Darwin appeared, and then
again at the Monkey Trial. But as long as its legal protection remains intact,
Darwinist dogma is in no imminent danger of being confronted with Creationist
dogma in the nation’s classrooms. In fact, all this could soon be moot because
many school districts have responded to the pressures being applied to them by
refusing to teach either viewpoint, which will leave a large and serious hole in
the educational background of our next generation of students.
Despite the extreme volatility of these issues, and the immediate rancor
received after aligning with the “wrong” side in someone else’s view, any
objective analysis will conclude that both Darwinists and Creationists are wrong
to a significant degree. Indeed, how could it be otherwise when each can shoot
such gaping holes in the other? If either side were as correct as, say,
Einstein’s general theory of relativity, which — apart from occasional
dissonance with quantum mechanics — has faced no serious challenge since
Einstein revealed it to an awestruck world in 1915, there would be no issues to
debate: one side would be declared right, the other would be wrong, and that
would be that.
We all know “right” when we see it, just as we all should know “wrong.”
Anyone without a vested interest should be willing to accept that the earth is
vastly older than 6,000 years. Likewise, despite widespread proof of the
noticeable changes in body parts called for by microevolution, there is no
clearly definitive evidence for the innumerable species-into-higher-species
transformations required by macroevolution. If Charles Darwin were alive today
and could be presented with the facts that have accumulated since his death,
even he would have to admit his theory has turned out wrong.
Let us make the assertion, then, that both Darwinists and Creationists are
wrong to such a degree that their respective theories are ripe for overthrow. It
is simply a matter of time and circumstance before one or another piece of
evidence appears that is so clear in its particulars and so overwhelming in its
validity, both sides will have no choice but to lay down their bullhorns and
laptops and slink off into history’s dustbin, where so many other similarly
bankrupt theories have gone before them. But until that happens, what about
those who would choose to explore more objective and possibly more accurate
scenarios for the creation of life itself and human life in particular?
Because of their all-out, do-or-die strategies, Darwinists and Creationists
stand at opposite ends of a very wide intellectual spectrum, which leaves a huge
swath of middle ground available to anyone with the courage to explore it.
Moreover, the signposts along that middle ground are numerous and surprisingly
easy to negotiate. All that’s required is a willingness to see with open eyes
and to perceive with an open mind.
The basic Darwinist position regarding how life began is called “spontaneous
animation,” which J.W. Dawson complained about back in 1873. It is the idea that
life somehow springs into existence suddenly, all by itself, when proper
mixtures of organic and inorganic compounds are placed into proximity and
allowed to percolate their way across the immensely deep chasm between non-life
and life. Based on everything known about the technical aspects of that
process—from 1873 until now—it is quite safe to say spontaneous animation
doesn’t have the proverbial snowball’s chance of enduring.
Ignore the howls of protest echoing from far off to our right. Here on the
middle ground reality rules, and reality says there is simply no way even the
simplest life form—say, a sub-virus-sized microbe utilizing only a handful of
RNA/DNA components—could have pulled itself together from any conceivable brew
of chemical compounds and started functioning as a living entity. To cite just
one reason, no laboratory has ever found a way to coax lipids into forming
themselves into a functional cell membrane, which is essential for encasing any
living microbe. Then there is permeability, which would also have to be a part
of the mix so nutrients could be taken into the cell and wastes could be
expelled.
Fred Hoyle, a brilliant English astronomer and mathematician, once offered
what has become the most cogent analogy for this process. He said it would be
comparable to “a tornado striking a junkyard and assembling a jetliner from the
materials therein.” This is because the complexity evident at even the tiniest
level of life is mind-boggling beyond belief. In short, it could not and did not
happen, and anyone insisting otherwise is simply wrong, misguided, or terrified
of dealing with what its loss means to their world view.
So, if spontaneous animation is simply not possible, how does life come into
existence? How can it be? Here we must call on an old friend, Sherlock Holmes,
who was fond of saying that in any quest for truth one should first eliminate
whatever is flatly impossible. Whatever remains, however unlikely, will be the
truth. With spontaneous animation eliminated, that leaves only one other viable
alternative: intervention at some level by some entity or entities. (Ignore the
rousing cheers erupting far to our left.)
Before anyone in our group of middle-ground explorers goes jogging off toward
those would-be winners, understand that “entity or entities” does not mean “God”
in the anthropomorphic sense espoused by Creationists. It means some aspect or
aspects of our present reality that we do not officially acknowledge—yet—but
which nonetheless exist and act on us, and interact with us, in ways we are only
just beginning to understand.
As of today, all human beings are bound by three dimensions. We are born into
them, we live in them, and we die in them. During our lives we struggle to fit
all of our personal experiences into them. Some of us, however, undergo
experiences or receive insights which indicate other levels of reality might
exist. These don’t manifest in our usual corporeal (body) sense, but in purely
ethereal forms that nonetheless have enough substance to make them perceivable
by those locked into the three known dimensions.
As woo-woo metaphysical as that might seem at first glance, please take
a closer look. There is a slowly emerging branch of “new” science which deals
with these other dimensions. Called hyperdimensional physics, it concerns itself
with devising and executing experiments that—however briefly—provide glimpses
into these other realms of reality. It is not greatly different from the
earliest days of Einstein’s time-and-motion studies, when he was trying to break
the 200-year-old academic straitjacket imposed by Newtonian physics. Now
Einstein’s revolutionary physics has become the straitjacket, and
hyperdimensional physics will eventually become the means to break out of it and
move humanity to a much higher level of awareness and understanding of true
reality.
Detailing these experiments is grist for another mill, but suffice to say
that string theorists are leading the charge. (Their subatomic “theory of
everything” requires ten or more new dimensions in order to be considered
valid.) In due course they and others will progress from the barest glimpses
being obtained at present to fully opening the doors to those other dimensions.
When they do, they are likely to find them populated by the kind of entity or
entities discussed earlier, beings who are not necessarily “God” with a capital
“G,” but rather “gods” with small “g’s.” Perhaps, even, the same plural “gods”
mentioned in Genesis (“Let us make man in our image, after our likeness”). But
that, too, is grist for another mill. However, it does lead into an analysis of
how humanity came to be as it is.
The problem is simple: nobody in any conceivable position of power wants to
confront the truth about human origins. No scientist, no politician, no
clergyman could hope to preserve his or her authority—at whatever level—after
actively coming forward with the truth about this incendiary subject. They have
all seen colleagues “disappeared” from their ranks for stepping out of line, so
they know retribution is swift and sure.
As noted above, Creationists insist that God (a singular male now, reduced
from the genderless plurals of original Biblical text) created man in His own
image, after His own likeness. Well, if that’s true, He must have been having a
heck of a bad day, because we humans are a poorly designed species. True, we do
have highly capable brains, but for some reason we are only allowed to use a
relatively small portion of them. (Now we will hear frantic howls of protest
from the scientists off to our right, but ignore them. If 100 idiot savants can
access 100 different portions of their brains to perform their astounding
intellectual feats, then those same portions must be in our brains, too, but our
normalcy keeps us from being able to access them. Period.)
Morally we are a terrible mishmash of capacities, capable of evil incarnate
at one moment and love incarnate the next, while covering every range of emotion
in between. Physically we carry more than 4,000 genetic disorders, with each of
us averaging about 50 (some carry many more, some many fewer). New ones are found
on a regular basis. No other species has more than a handful of serious ones,
and none which kill 100% of carriers before they can reach maturity and
reproduce. We have dozens of those. So how did they get into us? Better yet, how
do they stay in us? If they are 100% fatal before reproduction is possible, how
could they possibly spread through our entire gene pool?
If we assume God was at His best the day He decided to create us, functioning
in His usual infallible mode, that gives Him no legitimate excuse for designing
us so poorly. Surely He could have given us no more physical disorders than,
say, our nearest genetic relatives, gorillas and chimps. A little albinism never
hurt any species, not those two or ours or dozens of others that carry it, so
why couldn’t He just leave it at that? What could have been the point of making
us much less genetically robust than all the other species we are supposed to be
related to?
There is no point to it, which is my point. It simply didn’t happen that way.
Now, let’s examine the Darwinist dogma that humans descended from primates
(chimps and gorillas) by gradually transitioning through a
four-million-year-long series of prehumans known as Australopithecines (Lucy,
etc.) and early Homos (Homo habilis, Homo erectus, etc.). Even though
Australopithecines undoubtedly walked upright (their kind would have left the
famous pair of bipedal tracks at Laetoli, Tanzania, 3.5 million years ago),
their skulls are so ape-like as to make them ineligible as possible human ancestors.
But let’s assume that somehow they bridged the evolutionary gap between
themselves and early Homos, which indeed are in the ballpark of physical
comparison with humans.
Notice that in any series of photos showing the skulls of the Homo prehumans,
little changes over time except the size of their brains, which increase by
leaps of roughly 200 cubic centimeters between species. Every bone in those
skulls is much denser and heavier than in humans; all of them lack foreheads;
they have huge brow ridges; large, round eye sockets holding nocturnal (night)
vision eyes; wide cheekbones; broad nasal passages beneath noses that had to
splay flat across their faces (no uplift of bone to support an off-the-face
nose); mouths that extend outward in the prognathous fashion; and no chins.
Each of those features is classic higher primate, and they predominate in the
fossil record until only 120,000 years ago, when genuinely human-looking
creatures—the Cro-Magnons—appear literally “overnight” (in geological terms),
with absolutely everything about them starkly different from their predecessors.
In fact, the list of those differences is so lengthy, it is safe to say humans
are not even primates! (More howls of outrage from off to our right, but please
keep to the middle ground and consider the evidence.)
According to our mitochondrial DNA, humans have existed as a distinct species
for only about 200,000 years, give or take several thousand. This creates quite
a problem for Darwinists because they contend we are part of the sequence
extending back through the Australopithecines at four million years ago.
Furthermore, we should follow directly after the Neanderthals, which followed
Homo erectus. But now the Neanderthals, which existed for about 300,000 years
and overlapped Cro-Magnons by about 100,000 of those, have provided
mitochondrial samples which indicate they are not related closely enough to
humans to be direct ancestors. This compounds yet another serious transition
problem because human brains are on average 100 cubic centimeters smaller than
Neanderthal brains! How might that have happened if we are on a direct ancestral
line with them?
Anthropologists are now left with only Homo erectus as a possible direct
ancestor for humans, and Erectus supposedly went extinct 300,000 years
ago—100,000 before we appeared. Obviously, something had to give here, and—as in
war—truth has been the first casualty. Recently anthropologists started
reevaluating Homo erectus fossils from Indonesia and guess what? They are now
finding possible dates as recent as 30,000 years ago, well after the point,
120,000 years ago, when Cro-Magnons first appeared in the fossil record. Such a
surprise!
However, scientists still have to account for our “sudden” appearance and our
wide array of new traits never before seen among primates.
Understand this: humans are not primates! Yes, we do fit the technical
definition of having flexible hands and feet with five digits, but beyond that
there is no reasonable comparison to make. We don’t have primate bone density
(theirs is far more robust than ours) or muscular strength (pound for pound they
are 5 to 10 times stronger than we are); but we do have foreheads; minimal brow
ridges; small, rectangular-shaped eye sockets holding poor night-vision eyes;
narrow nasal passages with noses that protrude off our faces; mouths that are
flat rather than prognathous; we have chins; and we are bipedal.
Apart from those skeletal differences, we don’t have primate brains (that is
an understatement!), throats (we can’t eat or drink and breathe at the same
time; they can); voices (they can make loud calls, but we can modulate them into
the tiny pieces of sound that make up words); body covering (they all have pelts
of hair from head to toe, thick on the back and lighter on the front; we have no
pelt and our thickness pattern is reversed); we cool ourselves by sweating
profusely (they tend to pant, though some sweat lightly); we shed tears of
emotion (no other primate does); we do not regulate our salt intake (all other
primates do); we have a layer of fat of varying thickness attached to the
underside of our skin, which primates do not have; that fat layer prevents
wounds to our skin from healing as easily as wounds to primate skin; human
females have no estrus cycle, unlike all other primate females; but the number one difference
between humans and primates is that humans have only 46 chromosomes while all
higher primates have 48!
This last fact is the clincher. You can’t lose two entire chromosomes (think
how much DNA that is!) from your supposedly “parent” species and somehow end up
better. And not just better, a light-year better! It defies logic to the point
where any reasonable person should be willing to concede that something
“special” happened in the case of humans, something well beyond the ordinary
processes of life on Earth. And it did. The “missing” chromosomes, it turns out,
are not actually missing. The second and third chromosomes in higher primates
have somehow been spliced together (there is no other term for it) by an utterly
inexplicable—some might call it “miraculous”— technique.
Once again, the only plausible explanation seems to be intervention. But by
whom? The same hyperdimensional entity or entities that might have created life
in the first place? Not necessarily. Certainly that would have to be considered
as a possibility, but humans were probably a breeze to create relative to
initiating life and engineering all subsequent forms. That leaves room for
three-dimensional assistance. In other words, we could have been created as we
are by other three-dimensional beings who for reasons of their own decided to
make us “in their own image, after their own likeness.”
Accepting such a heretical explanation would certainly go a long way toward
resolving these anomalies about humanity: (1) our many inexplicable differences
from primates; (2) our all-too-sudden appearance in the fossil record; (3) our
much-too-recent speciation; (4) our lack of a clear ancestor species; (5) our
astounding number of genetic flaws; and (6) the unmistakable splicing done to
our second and third chromosomes. The last two are, not surprisingly, hallmarks
of hybridization and genetic manipulation, which is exactly how human origins
were accounted for by—get this—the ancient Sumerians! We began this essay with
them, and now we will end it with them.
As was noted at the beginning, the Sumerians were Earth’s first great
culture, emerging fully formed from the Stone Age around 6,000 years ago (shades
of Bishop Ussher!). They utilized over 100 of the “firsts” we now attribute to a
high civilization, among them the first writing (cuneiform), which they
inscribed on clay tablets that were fired in kilns (another first) into stone.
Thousands of those tablets have survived, and in many of them the Sumerians
describe a period wherein hundreds of three-dimensional “gods” (with a small
“g”) came to Earth from another planet orbiting in a long clockwise ellipse
around the Sun rather than in a counterclockwise circle like the other planets.
While on Earth, those vastly superior beings decided to create for themselves
a group of slaves and servants they would call Adamu. It was written in stone
over 4,000 years ago (1,500 years before the Old Testament) that those “gods”
agreed to “make the Adamu in our own image, after our own likeness.” They did it
by processes that sound remarkably like genetic engineering, in vitro
fertilization, and hybridization. Perhaps most remarkable of all, they said they
did it around 200,000 years ago, precisely when our mitochondrial DNA—against
all expectations—says we originated as a species!
When the task of creating the Adamu was complete, the first of them were put
to work in the Lower World of deep, hot mineshafts in southern Africa, where—not
to put too fine a point on it—nearly every modern authority agrees that
humankind originated. Eventually a surplus of slaves and servants became
available, so that group was sent to work in the lush Upper World home of our
alleged creators, which they called the E.Din (“home of the righteous ones”)
located in the Tigris-Euphrates Valley of modern Iraq.
All went well until the end of the last Ice Age, around 15,000 years ago,
when the gods realized the immense icecap covering Antarctica was rapidly
melting, and at some point in the future its massive edges would drop into the
surrounding oceans and cause gigantic tidal waves to sweep across Earth’s
lowlands, where their cities were. Because all Adamu could not be saved, several
of the best were chosen to survive in a specially constructed boat able to
withstand the immense tsunamis that were certain to strike.
When the time came, the gods boarded their spacecraft and lifted off into the
heavens, from where they watched the devastation below and were shocked by the
level of destruction. But when the waters receded enough for them to come down
and land in the Zagros Mountain highlands, above the now mud- and sludge-covered
E.Din valley, they joined the surviving Adamu to begin rebuilding their
civilization.
Again, not to put too fine a point on it, but most scholars now agree that
modern civilization (settlements, farming, etc.) inexplicably began around
12,000 years ago in the Zagros Mountain highlands, where settlements would be
extraordinarily difficult to build and maintain, and where terrace farming in
poorly watered, sparse mountain soil (not to mention arid weather) would be
vastly more demanding than in any fertile, well-watered lowlands. Yet the same
scholars do not accept that there was any kind of worldwide flood event which
may have caused a prior civilization to have to reboot itself in dry highlands.
In general, modern scholars scoff at all similar correlations to the Sumerian
texts, considering them nothing more than an extended series of coincidences.
They insist the Sumerians were merely being “overly creative” while forming
incredibly sophisticated, richly detailed “myths.” After all, the myriad
wondrous things they described over four thousand years ago simply could not be
an accurate record of their “primitive” reality.
Or could it?
LIFE'S TRUE BEGINNINGS
by Lloyd Pye
This was published in England's Quest Magazine in April 1999.
Framing The Picture
How did life begin on Earth? More intellectual and literal blood has been
shed and spilled attempting to answer this question than any other in any aspect
of science or religion. Why? Because the answer, if it could be determined
beyond doubt, would reveal to us the deepest meanings behind ourselves and all
that we see around us. More importantly, it would demolish once and for all the
thorny tangle of conscious and unconscious thought and belief that causes most
of the bloodshed.
At present there are only two socially acceptable explanations for how life
has come to be on Earth. Science insists it has developed by entirely natural
means, using only the materials at hand on the early planet, with no help from
any outside source, whether that source be divine or extraterrestrial. Religion
insists with equal fervor that life was brought into existence whole and
complete by a divine Creator called by different names by the world's various
sects. Between these two diametrically opposed viewpoints there is no overlap,
no common ground where negotiation might be undertaken. Each considers its own
position to be totally correct and the other totally wrong, a certainty
bolstered by the fact that each can blow gaping holes in the logic/dogma of the
other.
Science is quick to point to the overwhelming technical proofs that life
could not, and indeed did not, appear whole and complete within the restricted
time frame outlined in the Biblical account. Of course, people of faith are
immune to arguments based on fact or logic. Faith requires that they accept the
Biblical account no matter how dissonant it might be with reality. Besides, they
can show that not a shred of tangible evidence exists to support the notion that
any species can transmute itself into another species given enough time and
enough positive genetic mutations, which is the bedrock of Charles Darwin's
theory of incremental evolution, or "gradualism."
In 1835 Darwin visited the Galapagos Islands and noticed certain
species had developed distinct adaptations for dealing with various
environmental niches found there. Finch beaks were modified for eating fruit,
insects, and seeds; tortoise shells were notched and unnotched for high-bush
browsing and low-bush browsing. Every variation clearly remained part of the
same root stock--finches remained finches, tortoises remained tortoises--but
those obvious modifications in isolated body parts led Darwin to the logical
assumption that entire bodies could change in the same way over vastly more
time. Voila! Gradualism was conceived and, after gestating nearly three decades,
was birthed in 1859 with the publication of the landmark On the Origin of
Species. Since then Darwin and his work have been topics of intense, usually
acrimonious debate between science and religion.
The irony of a two-party political system whose members spend the majority of
their time shooting holes in each other’s policies is that it becomes
abundantly clear to everyone beyond the fray that neither side knows what the
hell it is talking about. Yet those standing outside the science-religion fray
do not grow belligerent and say, "You're both wrong. An idiot can see that. Find
another explanation." No! In this emotionally charged atmosphere nearly everyone
seems compelled to choose one side or the other, as if seeking a more objective
middle ground would somehow cause instant annihilation. Such is the
psychological toll wrought on all of us by the take-no-prisoners attitude of the
two sides battling for our hearts and minds regarding this issue.
Facts Will Be Facts
Because those of faith insist on being immune to arguments based on facts,
they remove themselves from serious discussions of how life might have actually
come to be on Earth. So if anyone reading this has a world view based on divine
revelation, stop here and move on to something else. You will not like (to say
the least!) what you are about to read. Nor, for that matter, will those who
believe what science postulates is beyond any valid doubt. As it turns out, and
as was noted above, neither side in this two-party system knows what the hell it
is talking about.
To move ahead, we must assign a name to those who believe life spontaneously
sprang into existence from a mass of inorganic chemicals floating about in the
early Earth's prebiotic seas. Let's call them "Darwinists," a term often used
for that purpose. Darwinists have dealt themselves a difficult hand to play
because those prebiotic seas had to exist at a certain degree of coolness for
the inorganic chemicals floating in them to bind together into complex
molecules. Anyone who has taken high school chemistry knows that one of the best
ways to break chemical bonds is to heat them.
Given that well-known reality, Darwinists quickly postulated that the first
spark of life would no doubt have ignited itself sometime after the continental
threshold was reached around 2.5 billion years ago. At that point land would
have existed as land and seas would have existed as seas, though not in nearly
the same shapes we know them today. But the water in those seas would have been
cool enough to allow the chemical chain reactions required by "spontaneous
animation." So among Darwinists there arose a broad consensus that the
spontaneous animation of life had to have occurred (again, because they do not
allow for the possibility of outside intervention, divine or extraterrestrial),
and it had to have occurred no earlier than the continental threshold of 2.5
billion years ago.
These assumptions were believed and taught worldwide with a fervor that
leaves religious fundamentalists green with envy. Furthermore, they were taught
as facts because that is what science inevitably does. It reaches a consensus
about a set of assumptions in a field it has not fully mastered, then those
assumptions are believed as dogma and taught as facts until the real facts
become known. Sometimes such consensus "facts" endure for a short time (Isaac
Newton's assumption that the speed of light was a relative measure lasted only
200 years), while others endure like barnacles on the underside of our awareness
(the universe doggedly expands beyond every finite measure given for it).
In the same way Newton's fluctuating speed of light was overturned by Albert
Einstein's theory of relativity, the continental threshold origin of life was
blown out of the water, so to speak, by discoveries in the 1970's that indicated
life's origins were much older than anticipated. So old, in fact, it went back
nearly to the point of coalescence, 4.5 billion years ago, when the Sun had
ignited and the protoplanets had taken the general shapes and positions they
maintain today. Ultimately, 4.0 billion years became the new starting point for
life on Earth, based on fossilized stromatolites discovered in Australia that
dated to 3.6 billion years old.
For Darwinists that meant going from the frying pan into the fire, literally,
because at 4.0 billion years ago the proto-Earth was nothing but a seething
cauldron of lava, cooling lava, and steam, about as far from an incubator for
incipient life as could be imagined. In short, right out of the gate, at the
first crack of the bat, Charles Darwin was, as they say in the South, a blowed-up
icon.
Limbo Of The Lost
The fossilized stromatolites discovered in Australia had been produced by the
dead bodies of billions of prokaryotic bacteria, the very first life forms known
to exist on the planet. They are also by far the simplest, with no nucleus to
contain their DNA. Yet in relative terms prokaryotes are not simple at all. They
are dozens of times larger than a typical virus, with hundreds of strands of DNA
instead of the five to ten of the simplest viruses. So it is clear that
prokaryotes are extremely sophisticated creatures relative to what one would
assume to be the very first self-animated life form, which can plausibly be
imagined as even smaller than the smallest virus.
(By the way, viruses do not figure into this scenario because they are not
technically "alive" in the classic sense. To be fully alive means having the
ability to take nourishment from the immediate environment, turn that
nourishment into energy, expel waste, and reproduce indefinitely. Viruses need a
living host to flourish, though they can and do reproduce themselves when
ensconced in a suitable host. So it seems safe to assume hosts precede viruses
in every case.)
Needless to say, the discovery of fossilized prokaryotes at 3.6 billion years
ago left scientists reeling. However, because so many of their pet theories had
been overturned in the past, they knew how to react without panic or stridency.
They made a collective decision to just whistle in the dark and move on as if
nothing had changed. And nothing did. No textbooks were rewritten to accommodate
the new discovery. Teachers continued to teach the spontaneous animation theory
as they had been doing for decades. The stromatolites were consigned to the
eerie limbo where all OOPARTS (out-of-place artifacts) dwell, while scientists
edgily anticipated the next bombshell.
They didn't have to wait long. In the late 1970s a biologist named Carl
Woese discovered that not only did life appear on Earth in the form of
prokaryotes at around 4.0 billion years ago, there was more than one kind! Woese
found that what had always been considered a single creature was in fact two
distinct types he named archaea and true bacteria. This unexpected, astounding
discovery made one thing clear beyond any shadow of doubt: Life could not
possibly have evolved on Earth. For it to appear as early as it did in the
fossil record, and to consist of two distinct and relatively sophisticated types
of bacteria, meant spontaneous animation flatly did not occur.
This discovery has been met with the same resounding silence as the
stromatolite discovery. No textbooks have been rewritten to accommodate it. No
teachers have changed what they are teaching. If you can find a high school
biology teacher that religious fundamentalists have not yet terrorized into
silence, go to their classroom and you will find them blithely teaching that
spontaneous animation is how life came to be on Earth. Mention the words "stromatolite"
or "prokaryote" and you will get frowns of confusion from teacher and students
alike. For all intents and purposes this is unknown information, withheld from
those who most need to know about it because it does not fit the currently
accepted paradigm built around Charles Darwin's besieged theory of gradualism.
The ongoing, relentless assaults on gradualism by religious fundamentalists
are the principal reason scientists can't afford to disseminate these truths
through teaching. If fundamentalists would keep their opinions and theories
inside churches, where they belong, scientists would be far more able (if not
inclined) to acknowledge where reality does not coincide with their own
theories. But because fundamentalists stand so closely behind them, loudly
banging on the doors of their own bailiwick, schools, scientists have no choice
but to keep them at bay by any means possible, which includes propping up an
explanation for life's origins that has been bankrupt for more than two decades.
Another reason scientists resist disseminating the truth is that it would so
profoundly change the financial landscape for many of them. Consider the
millions and billions of tax dollars and foundation grants that are spent each
year trying to answer one question: Does life exist beyond Earth? The reality of
two types of prokaryotes appearing suddenly, virtually overnight, at around 4.0
billion years ago provides overwhelming testimony that the answer is "Yes!"
Clearly life could not have spontaneously animated from inorganic chemicals in
seas comprised of seething lava rather than relatively cool water. So billions
of dollars of funding would vanish if scientists ever openly conceded that life
must have come to Earth from somewhere else because it obviously could not have
A third reason scientists avoid disseminating this knowledge is that
spontaneous animation is a fundamental tenet of their corollary theory of human
evolution. As with life in general, scientists insist that humanity is a product
of the same protracted series of gradual genetic mutations that they feel
produced every living thing on Earth. And, again, all this has been done by
natural processes within the confines of the planet, with no outside
intervention of any kind, divine or extraterrestrial. So, if spontaneous
animation goes out the window, then the dreaded specter of outside intervention
comes slithering in to take its place, and that idea is so anathema to
scientists that they would rather endure the myriad embarrassments caused by
their blown-up icon and his clearly bankrupt theory than let it in the door.
So What Is The Answer?
Life came to Earth from somewhere else--period. It came to Earth whole and
complete, in large volume, and in two forms that were invulnerable to the most
hostile environments imaginable. Given those proven, undeniable realities, it is
time to make the frightening mental leap that few if any scientists or
theologians have been willing or able to make: Life was seeded here! There ...
it's on the table ... life was seeded here.... The Earth hasn't split open.
Lightning bolts have not rained down. Time marches on. It seems safe to discuss
the idea further.
If life was actually seeded here, how might that have happened? By
accident....or (hushed whisper) deliberately? Well, the idea of accidental
seeding has been explored in considerable detail by a surprising number of
non-mainstream thinkers and even by a few credentialed scientists (British
astronomer Fred Hoyle being perhaps the most notable). The "accidental seeding"
theory is called panspermia, and the idea behind it is that bacterial life came
to Earth on comets or asteroids arriving from planets where it had existed
before they exploded and sent pieces hurtling through space to collide some
millennia later with our just-forming planet.
A variation of this theory is called directed panspermia, which replaces
comets and asteroids with capsules launched by alien civilizations to traverse
space for millennia and deliberately home in on our just-forming planet.
However, the idea of conscious direction from any source beyond the confines of
Earth is as abhorrent to science as ever, so directed panspermia has received
little better than polite derision from the establishment. Yet even though
undirected panspermia blatantly defies the scientific tenet that all of life
begins and ends within the confines of Earth, it is marginally acceptable as an
alternative possibility. There have even been serious, ongoing attempts to determine
if the raw materials for life might be found in comets.
The point to note here is that no one wants to step up to the plate and
suggest the obvious, which is that some entity or entities from somewhere beyond
our solar system came here when it was barely formed and for whatever reason
decided to seed it with two kinds of prokaryotes, the hardiest forms of bacteria
we are aware of and, for all we know, creatures purposefully designed to be
capable of flourishing in absolutely any environment in the universe.
(Understand that prokaryotes exist today just as they did 4.0 billion years ago
... unchanged, indestructible, microscopic terminators with the unique ability
to turn any hell into a heaven. But more about that in a moment.)
If we take the suggested leap and accept the notion of directed-at-the-scene
panspermia, we are then confronted with a plethora of follow-up questions. Were
all of the planets seeded, or just Earth? Why Earth? Why when it was a seething
cauldron? Why not a couple of billion years later, when it had cooled off? Good
questions all, and many more like them can be posed. But they all lead away
from the fundamental issue of why anyone or (to be fair) anything would want to
bring life here in the first place, whether to the proto-Earth or to any other
protoplanet? And this brings us to the kicker, a question few of us are
comfortable contemplating: Is Earth being deliberately terraformed?
Welcome To The Ant Farm
The concept of terraforming does indeed conjure up images from the recent
movie "Antz." Nevertheless, for all we know that is exactly what we humans--and
all other life forms, for that matter--are, players on a stage that seems
immense to us, but (visualize the camera pulling back at the end of "Antz") in
reality is just a tiny orb swirling through the vastness of a seemingly infinite
universe. An unsettling and even unlikely scenario, but one that has to be
addressed. Well, so what? What if we are just bit players in a cosmic movie that
has been filming for 4.0 billion years? As long as we are left alone to do our
work and live our lives in relative peace, where is the harm in it?
Is this fantastic notion really possible? Is it even remotely plausible?
Consider the facts as we know them to be, not what we are misled into believing
by those we trust to correctly inform us. The simple truth is that life came to
our planet when it (Earth) had no business hosting anything but a galactic-level
marshmallow roast. The life forms that were brought, the two prokaryotes, just
happen to be the simplest and most durable creatures we are aware of. And, most
important of all, they have the unique ability to produce oxygen as a result of
their metabolic processes.
Why oxygen? Why is that important? Because without an oxygen-based atmosphere
life as we currently know it is impossible. Of course, anaerobic organisms live
perfectly well without it, but they would not make good neighbors or dinner
companions. No, oxygen is essential for complex life as we know it, and quite
possibly is necessary for higher life forms everywhere. If that is the case, if
oxygen is the key ingredient for life throughout the universe, then from a
terraformer's perspective bringing a load of prokaryotes to this solar system
4.0 billion years ago begins to make a lot of sense.
Let's put ourselves in their shoes (or whatever they wear) for a moment. They
are a few million or even a few billion years into their life cycle as a
species. Space and time mean nothing to them. Traversing the universe is like a
drive across Texas to us...a bit long but easily doable. So as they travel
around they make it a point to look for likely places to establish life, and 4.0
billion years ago they spot a solar system (in this case ours) forming off their
port side. They pull a hard left and take it all in. At that point every
protoplanet is as much a seething cauldron as the proto-Earth, so they sprinkle
prokaryotes on all of them in the hope that one or more will allow them to take hold.
What the terraformers know is that if the prokaryotes ultimately prevail,
then over time trillions of them will produce enough oxygen to, first, turn all
of the cooling planet's free iron into iron-oxide (rust). Once that is
done...after, say, a billion years (which, remember, means nothing to the
terraformers)...oxygen produced by the prokaryotes will be free to start
saturating the waters of the seas and the atmosphere above. When enough of that
saturation occurs (say, another billion years), the terraformers can begin to
introduce increasingly more complex life forms to the planet.
This might include, for example, eukaryotes, Earth's second life form,
another single-celled organism which clearly appeared (rather than evolved) just
as suddenly as the prokaryotes at (surprise!) around 2.0 billion years ago.
Eukaryotes are distinctive because they are the first life form with a nucleus,
which is a hallmark of all Earth life except prokaryotes. We humans are
eukaryotic creatures. But those second immigrants (which, like prokaryotes,
exist today just as they did when they arrived) were much larger than their
predecessors, more fragile, and more efficient at producing oxygen.
After establishing the first portion of their program, the terraformers wait
patiently while the protoplanet cools enough for "real" life forms to be
introduced. When the time is right, starting at around half a billion years ago,
higher life forms are introduced by means of what today is called the "Cambrian
Explosion." Thousands of highly complex forms appear virtually overnight, males
and females, predators and prey, looking like nothing alive at present. This is
what actually happened.
The terraformers continue to monitor their project. They notice Earth suffers
periodic catastrophes that eliminate 50% to 90% of all higher life forms. (Such
mass extinction events have in fact occurred five times, the last being the
Cretaceous extinction of 65 million years ago, which wiped out the dinosaurs).
They wait a few thousand years after each event while the planet regains its
biotic equilibrium, then they restock it with new plants and animals that can
make their way in the post-catastrophe environment. (This, too, is actually
borne out by the fossil record, which scientists try to explain away with a
specious addendum to Darwinism called "punctuated equilibrium.")
As outrageous as the above scenario might seem at first glance, it does
account for the real, true, literal evidence much better than either Darwinism
or Creationism ever have...or ever will. This produces the bitterest irony of
the entire debate. With pillars of concrete evidence supporting outside
intervention as the modus for life's origins on Earth, the concept is ignored to
the point of suppression in both scientific and religious circles. This is, of
course, understandable, because to discuss it openly might give it a credibility
neither side can afford at present. Both have their hands quite full maintaining
the battle against each other, so the last thing either side wants or needs is a
third wheel trying to crash their party. However, that third wheel has arrived
and is rolling their way.
EARLIEST HUMAN ANCESTOR? NOT LIKELY!
by Lloyd Pye
A STAR IS BORN
Media everywhere have recently carried banner stories about the discovery in
Ethiopia of fossil bones deemed the oldest yet found of the primate species that
eventually evolved into humans. Worldwide news outlets for TV, print, radio, and
wire have trumpeted the inexorable march of science back to the moment when the
so-called “common ancestor” of apes and humans will eventually be unearthed.
Such reports are given as if no other result is remotely possible; it is simply
a matter of time and circumstance. But is it?
The new fossils average 5.5 million years old, neatly fitting within the
range of 5 to 7 million years ago that is the accepted window for when humans
and apes diverged from the common ancestor. However, that window is heavily
fogged with assumptions rather than provable calculations. Geneticists have made
broad assumptions about mutation rates in the mitochondrial DNA of great apes,
which just happen to dovetail within the window with equally broad assumptions
made by physical anthropologists.
The anthropological estimate begins with an astonishing string of
human-shaped footprints tracked across volcanic ash 3.5 million years ago in
what today is Laetoli, Tanzania. Upright bipedal walking is considered a
hallmark of humanity and all of its predecessors, so if it was firmly
established at 3.5 million years ago, the process had to begin at least 2 or 3
million years earlier. Add 2 to 3 million years to 3.5 million and you arrive at
5.5 to 6.5 million years ago. Tack on another half million front and back for
coverage and presto! Primates started becoming bipedal 5 to 7 million years ago.
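The back-of-the-envelope arithmetic above can be written out explicitly. This is only a sketch of the essay's own numbers (the 3.5-million-year Laetoli date, the assumed 2-to-3-million-year lead time, and the half-million-year padding), not an anthropological calculation:

```python
# Sketch of the essay's dating arithmetic (all figures in millions of
# years ago, Mya), following the reasoning given in the text.
laetoli_tracks = 3.5        # age of the Laetoli footprints
lead_time = (2.0, 3.0)      # assumed time needed to establish bipedality
padding = 0.5               # "tack on another half million front and back"

# Oldest plausible start: tracks + longest lead time + padding
earliest = laetoli_tracks + lead_time[1] + padding   # 7.0 Mya
# Youngest plausible start: tracks + shortest lead time - padding
latest = laetoli_tracks + lead_time[0] - padding     # 5.0 Mya

print(f"Estimated divergence window: {latest} to {earliest} Mya")
```

Running this reproduces the essay's 5-to-7-million-year window, which makes the point that the "accepted window" is simple addition layered on assumptions rather than an independent measurement.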
THE DOGMA SHUFFLE
Despite howls of protest to the contrary, that is usually how scientists
operate. They will arrive at a poorly supported conclusion because it seems
logical based on what they know at a certain point in time. Rather than make
that conclusion provisional, which should be automatic because science is
nothing more than a long series of corrected mistakes, their assumption becomes
dogma that is strenuously defended until a new conclusion is shoved down the
unwilling throats of the specialists responsible for perpetuating the dogma.
A clear example occurred decades ago when scientists arrived at the seemingly
obvious conclusion that humanity was propelled to its destiny by a radical
change in climate. The forest homes of the early great apes—and the supposed
common ancestor of humanity—must have suffered a severe blight, forcing some
primates to begin making their way out onto the savannas that replaced the
forests. In the process, increased hand dexterity would become essential. Tools
and weapons would have to be held or carried, as well as food and possibly
infants, although this last notion was and remains a point of contention.
Though lacking truly opposable thumbs, nonhuman primate infants have enough
strength and dexterity in their hands and feet to cling to their mothers’ body
hair from the first few moments after birth. Human babies must be carried almost
constantly for a full year and, to be safe, for ample parts of another. Nobody
can agree on when — much less why — such a severely negative physiological trait
would start to manifest, but one assumption is that it started when body hair
began to diminish and/or feet began losing the ability to grasp.
Another unsolved strategic puzzle is why prehumans would relinquish so much
physical strength (pound for pound all primates—even monkeys—are 5 to 10 times
stronger than humans) during the transition onto the savanna. That makes even
less sense than giving up the clinging ability of infants. However, as infants’
hands and feet lost traction, adult hands became ever more dexterous and their
feet became ever more adapted to upright locomotion, which—though
inexplicable—must have been a worthwhile trade-off.
THE AGONY OF THE FEET
Whatever the reasons, as prehuman hands were utilized for other tasks, they
could no longer be used for locomotion, which necessitated moving more and more
on the rear limbs alone. In short, so the theorizing went, the more we used our
hands, the more we were forced to stand upright. Furthermore, as we assumed both
of those radical changes in primate lifestyle, our brains grew larger to
accommodate all of the unique new tasks required to succeed in the new
environment. It was a conveniently reciprocal spiral of ever-increasing
sophistication and capability that led (or drove) us to our destiny.
That dogma stayed in place until 1974, when the famous fossil hominid “Lucy”
was discovered in a dry desert arroyo in Ethiopia. Dated reliably at 3.2 million
years ago, Lucy clearly walked upright as a fully functioning biped. There was
no doubt about it. Problem was, she had the head and brain of a chimpanzee. In
fact, she was little more than an upright walking chimpanzee, and a small one at
that (3.5 feet tall). Overnight, science lost its ability to insist that
brainpower had to increase, ipso facto, with the coequal modifications of hand
freedom and bipedality.
Lucy created other problems, too. Her arms seemed a bit longer than they
should have been in an incipient human, although lingering echoes of chimphood
were acceptable. A further echo was her hands, which had thumbs that were not
very opposable, and fingers that were longer and curved a bit more than seemed
appropriate. Vaguely ape-like hands atop markedly human-like feet did not sit
well with the established dogma. Then there was the problem of where she was
found, in an area that when she died was primarily wooded forest. That
confounded the dogmatists because forests rarely created fossils, while
prehumans were supposed to be found on savannas, which did produce fossils.
Lucy and several others of her kind (Australopithecus afarensis) forced
anthropologists to accept that primate brain modification had to be caused by
something other than hand and foot modification. However, it still made sense to
assume that any primate moving from forest to savanna had to use its hands to
hold and carry, and its feet to walk exclusively upright. Five years after Lucy,
the Laetoli tracks cemented that assumption, showing perfect bipedality on a
flat, open area—possibly a savanna—at 3.5 million years ago. Anthropologists
heaved a sigh of relief and considered Lucy’s woodland home a fluke.
Then, in 1994, a new fossil group called Ardipithecus ramidus was found in
Ethiopia and dated at 4.4 million years ago. Though 1.2 million years older than
afarensis, ramidus was every bit as bipedal, giving no sign of transition
between them. This trashed the idea that bipedality was an evolutionary lynchpin
for humanity. Worse, ramidus died — and apparently lived — in an area every bit
as forested as afarensis. Yikes!
[Like most of you reading this, I, too, deplore anthropology’s overblown
nomenclature. Would that they could be as succinct as astronomers. The beginning
of everything? The Big Bang. A big red star? A Red Giant. A small white star? A
White Dwarf. And so on…. Unfortunately, anthropologists earn their way making
mountains of suppositions out of molehills of data, the sparsity of which they
obfuscate with pedagogic pedantry.]
In 1995, with anthropologists still reeling from the “ramidus problem,” two
separate groups of fossils were found in Kenya. At about 4.0 million years old,
Australopithecus anamensis was only 400,000 years younger than ramidus, but they
were different enough to warrant inclusion in a separate genus, the one that
held Lucy and her ilk. Like afarensis and ramidus, anamensis was a fully erect
biped, which was another stake in the heart of bipedality as a construct of
prehuman evolution. That was bad enough. But despite its location distantly
south of northern Ethiopia, anamensis also lived and died in a forest.
Now comes the much ballyhooed discovery of Ardipithecus kadabba, 5.5 million
years old and 1.1 million years older than ramidus. And guess what? Kadabba was
also found in what was once heavy forest! That leaves anthropologists everywhere
hearing the first chilling notes of the Fat Lady warming up. Why? Because
prehumans could not possibly have evolved or developed, or whatever they did, in
forests. If that were true there would be absolutely no reason for them to
abandon established great ape behavior. Great apes have forest living wired to
an extreme, and they have had it wired for over 20 million years, back to when
their ancestors first appeared in the Miocene epoch.
THE SKELETON IN THE CLOSET
Just as it did with ramidus, the public will overlook or disregard the new
anomalous forested environment, and eventually anthropologists will be back to
business as usual. Everyone—scientists and public alike—will resume accepting
the idea that some small group of quadrupedal primates left the forests to live
on the savannas of their time and thereby became human. It could not possibly
have happened any other way. Humanity could not have evolved or developed in a
forest because we are physically unsuited to it. So what could make our earliest
ancestors do so? What could make them stand upright?
Nothing. That’s not a choice any sane creature would make. Forest dwelling
primates — even those like gorillas, which dwell primarily on the forest floor —
would not forego the ability to scamper up trees, or easily move from tree to
tree, without an overwhelmingly compelling reason, and no such reason could ever
exist in the forest itself. Only a radical, extended change in environment could
warrant the equally radical and extensive physical transformation from quadruped
to biped. And if no evidence for such an environmental change is discernible
over two million years of extremely early bipedality, right back to the alleged
point of divergence between great apes and prehumans, then anthropology is
facing a quintessential dilemma: How to explain such an inexplicable anomaly?
Surprisingly, there is an easy and simple solution. Unfortunately, it is not
in the ballpark of a wide range of currently accepted dogmas within and outside
of anthropology, and in this sensitive area of knowledge anthropologists are the
gatekeepers, tasked with making certain the rest of us aren’t exposed to it.
Why? Because, in the immortal words of Jack Nicholson, they don’t believe we can
handle it. Well, I think all but the most hidebound of us can, so for better or
worse, here it is. Read on if you want to know the truth.
ONCE UPON A TIME
It begins back in the Miocene epoch, mentioned earlier, which extended for
roughly 20 million years (25 to 5 million years ago). Over the course of those
20 million years, more than 50 species of tailless primate apes are known to
have roamed the planet. Those 50+ types have been classified into 20 genera (groups)
with names like Proconsul, Kenyapithecus, Dryopithecus, Sivapithecus, and most
familiar to a general audience, Gigantopithecus. Okay, show of hands….how many
reading this have heard of the Miocene and of the dozens of apes that lived
during the course of its 20 million years? Not many, eh?
The reason is that the Miocene presents a painful embarrassment to anyone who
supports the notion of Darwinian evolution, which definitely includes mainstream
anthropologists. Now, I am not a Creationist, so please don’t cop any attitude
because of the preceding sentence. It’s true and it must be stated. Evolution
dictates there should have been one, then two, then three, then four, etc., as
the magic of speciation produced more and more tailless primates to live
wherever they could adapt themselves to fit. Unfortunately for anthropologists,
the exact opposite occurred. Dozens came into existence during the Miocene, most
quite suddenly, with no obvious precursors, which is difficult enough to
explain. But then nearly all went extinct, leaving only six to thrive: two types
of gorilla, two types of chimp, gibbons, and orangutans. Why? How? Is that a plausible outcome?
No, it’s not. Miocene apes were ubiquitous, being found throughout Asia,
Africa, and Europe. They came in all sizes, from two-foot-tall elves to ten-foot
giants. In short, the planet was theirs to do with as they pleased. Their
natural predators would have been few, and the larger ones would have had little
to fear from any other creature, even big cats. But since Miocene apes lived
almost exclusively in forests, and the big cats lived almost exclusively on
savannas, their paths seldom crossed. So for the most part, and as with great
apes today, the majority of Miocene apes were masters of all they surveyed.
AGAIN UPON THE SAME TIME
Imagine the situation as it was….dozens of tailless ape species living
throughout the planet’s forests and in some cases jungles (the dry kind, not
swamps), microevolving to whatever degree necessary to make their lives
comfortable wherever they were. Given that scenario, what would cause all but
six types to go extinct? Well….nothing, really. In the past 20 million years
there have been no global catastrophes. The last of those was 65 million years
ago, when the dinosaurs were wiped out. So apart from enduring migrations
necessitated by the slow waxing and waning of Ice Ages, all Miocene apes would
have been free to pursue their individual destinies in relative peace and quiet.
This brings us to the crux of the anthropological dilemma: How to explain the
loss of so many Miocene apes when there is no logical or biologically acceptable
reason for it? They should still be with us, living in the forests and jungles
that sustained them for 20 million years. Species don’t go extinct on a whim,
they endure at almost any cost. They are especially hard to eradicate if they
are generalists not locked into a specific habitat, a trap many Miocene apes
seem to have avoided. In fact, several were apparently such efficient generalists, it
makes more biological sense for them to have survived into our own time than
ecological specialists like gorillas, chimps, gibbons, and orangutans.
As it happens, science does not know a tremendous amount about the bodies of
Miocene apes. Most of the categories have been classified solely by skulls,
skull parts, and teeth, which are the most durable bones in primate bodies. For
example, the best known of the Miocene apes, Gigantopithecus, is classified by
only four jawbones and many hundreds of teeth. Nevertheless, that is enough to
designate them as the physical giants they were, and so it goes with many
others. Among those others, enough fragments of arm and leg bones have been
recovered to show their limbs were surprisingly balanced in length.
Quadrupeds have arms that are distinctly longer than their legs to make
moving on all fours graceful and easy. Humans have arms that are distinctly
shorter than their legs. Some Miocene apes have arms that are equal in length to
their legs. Nonetheless, every Miocene ape is considered to have been a
quadruped. On the face of it, this would seem to warrant another, perhaps more
inclusive or flexible interpretation. Unfortunately, we can’t have one because
anthropologists insist that the six quadrupeds living among us today are fully
representative of all Miocene categories. That makes sense, doesn’t it?
I hope by now you can see where this is heading. There is absolutely no way
anyone can say for certain that all Miocene apes were quadrupeds. Clearly some
of them were, but it is equally possible that some were bipeds as early as 20
million years ago. That is based on established facts and undeniable logic, but
it will be strenuously disputed by virtually all anthropologists who might be
confronted with it. In fact, if you want to see someone get their knickers in a
twist, as the British like to say, suggest to an anthropologist that several of
the Miocene apes might well have been bipeds. If you accept this challenge, step
back, plug your ears, and brace yourself. You are in for a tongue lashing.
The problem for anthropologists is that if they acknowledge the distinct
possibility that some of the 50+ species of tailless Miocene apes might indeed
have been bipedal, they are opening the door to a possibility so embarrassing
that they don’t even like to dream about it, much less actively consider it.
That possibility—in case you haven’t guessed it by now—is hominoids in general
and bigfoot/sasquatch in particular. If there are words more able to infuriate
diehard, hardcore bone peddlers, I don’t know what they are.
Despite the vitriol and invective hurled on hominoids by all but a handful of
certified anthropologists, the historical record and biological reality dictate
that they stand a much greater chance of existing than of not existing. If we
make the assumption that they may have gotten their start in forests 20 million
years ago, and prospered in them for all those millennia, it establishes a solid
possibility that anthropologists are looking in the wrong direction trying to
figure out the lineage of kadabba, ramidus, Lucy, and every other so-called
prehuman through Neanderthals — none of which look anything like true humans.
Instead of looking forward to what such creatures might have developed into,
perhaps anthropologists would be better served to look back in time, into the
Miocene, to try to determine where they might have come from. Which Miocene ape
might have been the ancestor of kadabba? Which might have been the ancestor of
ramidus? Which of Lucy? And, most blood-chilling of all, which one might have
been the ancestor of bigfoot? Has anybody thought it might be….well….Gigantopithecus,
by any chance? A creature that by the undisputed size of its teeth and jaws had
to stand in the range of ten feet or so?
Sounds suspiciously convenient, doesn’t it? A giant ape is certain to have
lived on Earth for many millions of years, while a giant ape-like creature is
alleged to be currently living in deeply forested areas around the globe. Only
people of high intelligence and extensive specialized training would flagrantly
ignore such an obvious connection. Only those with, say, anthropological Ph.D.’s
could safely deny such a probable likelihood. That’s why we pay them the big
bucks and hire them to teach our children. They are beyond reproach.
A BIT OF MEA CULPA
I’m being facetious and even a tad mean-spirited here because I want to be
certain no one misses the point: Miocene apes are perfect candidates for all the
various hominoids that are alleged to live around the world, and not just the
bigfoot kind. There are at least three other types of varying sizes (two
different man-sized ones and a pygmy type), and quite possibly multiple examples
within the four size-based categories (the way there are two distinct types of
chimps and gorillas). There seem to be at least three types of bigfoot.
Imagine this scenario: Instead of 50+ Miocene apes, there might have been
only, say, a dozen or so, with regional variations classified as 50+ different
species due to the scarcity of their fossils. Of those dozen, maybe six were
quadrupeds and six were bipeds, with the bipeds being substantially more
intelligent, more active, and more wide-ranging than the down-on-all-fours
genetic kin. All twelve passed the millennia in their own time-tested fashions
and continue living alongside us humans today. None went extinct.
As radical as that scenario might sound at first, the facts as they exist
make it far more logical and probable than the current anthropological dogma
that all Miocene apes were quadrupeds, and that despite living in stasis for
millions of years, dozens inexplicably went extinct and left only the six we
classify today. And please don’t harass me with this old saw: “If hominoids are
real, why don’t we know about them? Why don’t we ever see them? Where are they?
Where are their dead bodies?” People who ask such questions are simply ignorant
of an astonishing array of valid research and hard data that exist but are
ignored by mainstream science because they do not conform to its current dogma.
We do know about hominoids; we do see them regularly; every single day at
some place on the planet some human encounters one or more of them. They are out
there living by the thousands…by the hundreds of thousands in order to maintain
breeding populations. But because these facts represent such a severe diminution
of our knowledge of the world around us, and equally diminish our sense of
control over everything around us, we are far more comfortable rejecting them as
a possibility. When the day comes for some lucky soul to finally cram this blatant
reality down our collectively unwilling throats, we will all get up the next day
and go to work as we have every day prior. But we will never be the same after
that day, not ordinary people and especially not mainstream scientists.
That is why we are not told these things in a truthful, realistic way. Those
in positions of power and authority do not believe we can handle it. My
contention is that it is they, not us, who can’t handle such stark facts…but I
could be mistaken. The rampant success of tabloids is a powerful indicator that
John and Jane Q. Public might not be quite ready to confront the notion that
everything they know about their genesis is stone cold wrong.
Fortunately, the situation isn’t subject to indefinite manipulation. No
matter how much those in control ignore, reject, or ridicule unacceptable
information, it is out there, it is true, and time will eventually prove its
reality. Meanwhile, the rest of us can only wait for the next—perhaps
final—crack in the dam of fear that keeps us all mired in ignorance.
ESSAY ON CARPENTER GENES
Why Darwinian Evolution Is Flatly Impossible
by Lloyd Pye
This was in Australia's Exposure Magazine in November 1998.
No matter how high evidence was stacked up against evolution in the past,
Darwinists could always slip through the "...it COULD have happened..."
loophole. As long as genetic mutations and slight physical changes
(microevolution) were evident, interspecies transitions (macroevolution) had to
be accepted as at least plausible. Not any more. In five brief pages, this
article closes the Darwinian loophole, and evolutionary science will never be
the same.
-David Summers, Publisher/Editor
Remembrance of Things Past
1999 will be the 140th anniversary of the publication of Charles Darwin’s On
the Origin of Species. In that landmark volume he postulated that life on Earth
had developed into its millions of forms through a long, slow series of
fundamental changes in the physical structure of all living things, plants and
animals alike. Though small and gradual, these changes would be relatively
constant. Bit by imperceptible bit, gills would turn into lungs, fins would turn
into limbs, scales would turn into skin, bacteria would turn into us. The
problem for Darwin, and for all Darwinists since, came when the mechanism behind
those changes had to be explained.
Because Darwin’s era was only beginning to understand cellular function (Gregor
Mendel’s treatise on genetics did not appear until 1865), Darwin proposed a
system of gradual physiological improvements due to small, discrete advantages
that would accrue to the best-adapted progeny (his famous “survival of the
fittest”) among all living things (a bit stronger, a bit swifter, a bit
hardier), making them subtly different from their parents and producing
offspring with similar advantages accruing in their physiological makeup. When
enough small changes had compounded themselves through enough generations…
voilà! A new species would have emerged, sexually incompatible with the original
parent stock, yet inexorably linked to it by a common physiological heritage.
Once cellular function came to be better understood, particularly the
importance of DNA as the “engineer” driving the entire train of life, it was
quickly embraced as the fundamental source of change in Darwin’s original model.
Darwinian evolution, as it came to be called, was indisputably caused by
mutations at the genetic level. Because such mutations were obvious to early
geneticists, and could eventually be induced and manipulated in their
laboratories, it seemed beyond doubt that positive mutations in DNA sequencing
were the key to explaining evolution. That left neutral mutations exerting no
effect, while negative mutations afflicted only the unlucky individuals who
expressed them but had no lasting impact on a species’ collective gene pool.
Darwin's Blackest Box
In 1996 Michael Behe, a biochemistry professor at Lehigh University in
Bethlehem, Pa., published a book called Darwin’s Black Box. He defined a “black
box” as any device that functions perfectly well, but whose inner workings
remain mysterious because they cannot be seen or understood. To Charles Darwin
the living cell was an impenetrable black box whose inner workings he could not
even imagine, much less understand. To scientists today that box is no
longer quite as black, but it is still dark enough to leave them with only a
faint understanding of how it works. They know its basic components and the
functions of those components, but they still don’t know how all those pieces
fit together to do what cells do--live.
Life is still every bit the profound mystery it was in Darwin’s day. Many
additional pieces of the puzzle have found their way onto the table since 1859,
but scientists today are not much closer to seeing the whole picture than Darwin
or his cronies. That is an ironic reality which few modern Darwinists will
accept in their own hearts and minds, much less advertise to the world in
general. So they supply the media with intellectual swill that the media, in
turn, unknowingly palms off as truth, while the scientists edgily cross their
fingers and hold their breath in the hope that someday, maybe even someday soon,
but certainly before the great unwashed get wise to the scam, they will finally
figure out the great secret...they will see into the heart of the universe’s
blackest box...they will understand how life actually works, from the first
moment of the first creation to evolution itself.
Shall We Gather At The River?
Darwinists teach and preach that life began spontaneously in a mass of
molecules floating freely in the Earth’s earliest rivers and seas. Those
molecular precursors somehow formed themselves into organic compounds that
somehow formed themselves into the very first living organism. This incredible
feat of immaculately choreographed bioengineering was, Darwinists insist,
accomplished without the aid of any outside agency, such as a Prime Mover (what
some would call “God”), and especially not anything extraterrestrial. It was
done using only the materials at hand on the early Earth, and accomplished
solely by the materials themselves, with a probable assist from a perfectly
timed, perfectly aimed lightning bolt that, in the most serendipitous moment
imaginable, swirled tens of thousands, or even hundreds of thousands of
inanimate molecules into a living entity.
However glibly Darwinists have fashioned and promoted this scenario in
schools to this day, the complexity of its mechanics might challenge the
creative skills of a busload of Prime Movers. Countless lipids have to somehow
be coaxed to form a membrane that somehow surrounds enough strands of DNA to
create a cell that can manage life’s two most basic functions: it must absorb
organic and inorganic compounds in its environment and turn them into proteins,
which can then be converted into energy and excreta; and it must have the
ability to reproduce itself ad infinitum. If all of those varied factors, each a
bona fide miracle in itself, do not occur in the precise order demanded by all
living cells for their tightly orchestrated, step-by-step development, then the
entire process becomes laughably improbable.
British astronomer Fred Hoyle has offered the classic analogy for this
scenario, stating that its actual likelihood of being true and real equals “that
of a tornado sweeping through a junkyard and correctly assembling a Boeing 747.”
It did not and could not happen then, just as it cannot be made to happen now.
The very best our biochemists can do today is construct infinitesimal pieces of
the puzzle, leaving them little nearer to seeing how life truly works than
Darwin and his cohorts 140 years ago. But why? What’s the problem? Haven’t we
cracked the atom? Haven’t we flown to the moon? Haven’t we mapped the ocean
floors? Yes, yes, and yes. But those things were easy by comparison.
Looking For Life In All The Wrong Places
If the Darwinists are so wrong, where are they wrong? What is the fundamental
mistake they are making? It has to do with where they are looking, which is the
cell, inside the cell, and specifically at the functioning of DNA. Because the
twisting double-helix of DNA contains the instructions for all of life’s
processes, the assumption has always been that disruptions in the patterns of
those instructions are the only logical explanation for how physiological
changes at both the micro (small) and macro (large) level must be created and
executed. In other words, changes in DNA (mutations) must be the engine driving
all aspects of evolutionary change. Nothing else makes sense.
Sensible or not, however, it is wrong. Why? Because in 1984 a group of
British researchers decided to do an experiment utilizing what was then
considered to be a universal truth about genes, handed down from Gregor Mendel
himself: the idea that genes are sexless. Mendel had postulated that a gene from
either parent, whether plant or animal, was equally useful and effective
throughout the lifetime of the individual possessing it. This was taken as
gospel until those British researchers tried to create mouse embryos carrying
either two copies of “father” genes or two copies of “mother” genes. According
to Mendel’s laws of inheritance, both male and female embryos should have
developed normally. After all, they had a full complement of genes, and if genes
were indeed sexless they had all they needed to gestate and thrive.
The researchers were stunned when all of their carefully crafted embryos were
dead within a few days of being transferred to a surrogate mother’s womb. How
could it happen? What could have gone so wrong in a scenario that couldn’t go
wrong? They were completely baffled. What they didn’t know, and what many refuse
to accept even now, fourteen years later, is that they had unwittingly opened
their own--and their icon’s--darkest, blackest box. They had ventured into a
region of the cell, and of the functioning of DNA, that they hadn’t imagined was
off-limits. By taking that inadvertent journey they ended up forging an entirely
new understanding of Mendelian inheritance, while driving a stake through the
already weakened heart of Darwinian evolution.
A Time To Live And A Time To Die
Normally, father genes or mother genes control the expression of their own
activity. A father gene might give, for example, the signal for a crop of head
hair to grow--to “express” itself--and to stop expressing when the follicles had
been constructed in their proper places in the scalp. The cessation of the
expressing process is called methylation: the attachment of chemical clusters
(methyl groups) that shut an expressing gene off (picture the cap being put
back on a toothpaste tube). In the same way, a mother gene might express a pair
of eyes and then, when they were completed, “methylate” the gene’s growth
processes into inactivity.
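The parent-of-origin behaviour described above can be sketched as a toy model. To be clear, this is an illustration only: the gene names, the all-or-nothing expression rule, and the viability check are invented simplifications, not real genetics.

```python
# Toy model of genomic imprinting: whether a copy of a gene is expressed
# depends on which parent it came from, not on the gene's own sequence.
# Gene names and rules here are illustrative, not real loci.

IMPRINTED = {
    "growth_factor": "father",   # only the paternal copy is expressed
    "growth_brake": "mother",    # only the maternal copy is expressed
}

def expressed(gene, parent_of_origin):
    """Return True if this particular copy of `gene` is switched on."""
    active_parent = IMPRINTED.get(gene)
    if active_parent is None:                 # ordinary "sexless" Mendelian gene:
        return True                           # either parent's copy works
    return parent_of_origin == active_parent  # imprinted: parent of origin matters

def embryo_viable(genome):
    """Toy rule: the embryo needs at least one expressed copy of every gene."""
    genes = {g for g, _ in genome}
    return all(any(expressed(g, p) for g2, p in genome if g2 == g)
               for g in genes)

# A normal embryo has one copy of each gene from each parent; the 1984-style
# experiment built embryos whose every copy came from a single parent.
normal  = [("growth_factor", "father"), ("growth_factor", "mother"),
           ("growth_brake", "father"), ("growth_brake", "mother")]
all_mom = [("growth_factor", "mother"), ("growth_factor", "mother"),
           ("growth_brake", "mother"), ("growth_brake", "mother")]

print(embryo_viable(normal))   # True: every gene has an active copy
print(embryo_viable(all_mom))  # False: growth_factor is never turned on
```

In this sketch a full complement of genes is not enough: the all-mother genome fails because its imprinted growth_factor copies are silent, which is the shape of the surprise the mouse-embryo experiment produced.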
Until 1984, it was believed that all genetic function operated the same way.
If a gene or suite of genes came from Dad’s side of the mating process, then
those genes managed their own affairs from birth until death. And the same held
true for genes coming from Mom’s side of the mating. But certain genes turned
out to exhibit radical differences, depending on whose side of the mating
process they came from. When the all-female mouse embryos died, it was found that
genes vital to their growth had inexplicably never been turned on at all, while
still others were never turned off (methylated) and spiraled unchecked into
cancers. Even more baffling, the fatal processes in the all-male embryos were
entirely different from those in the all-females. The embryos were dying for
reasons that were clearly sex-biased. What could it possibly mean?
Imprinted genes were found to be the culprit. Imprinted genes, it turned out,
could be expressed by either parent and, incredibly, methylated by the other
parent! Somehow, someway, by means not clearly imagined, much less understood,
genes from one parent had the ability to independently begin or end processes
that were critical to the lives of forming embryos. In the world of genetics as
it had always been perceived, that was impossible. Only a localized (sexless)
gene should be able to control its own destiny or purpose, not a separate gene
from an entirely different parent. Cooperating genes broke all the rules of
physical inheritance that had been written by Gregor Mendel. Yet imprinted genes
do, in fact, disregard Mendel’s rules; and by doing so they provide the
above-mentioned stake that will inevitably be driven through the heart of
classic Darwinism.
Life's Blueprint Writ Wrong
So far geneticists have identified about 20 imprinted genes embedded within
the 80,000 to 100,000 believed to comprise the entire human genome. New ones are
discovered on a regular basis, with many geneticists predicting the final tally
will reach hundreds, while others suspect the total might reach into the
thousands. But whether hundreds or thousands, any imprinted genes at all means
that classic Darwinism can no longer count on mutations in DNA as a plausible
mechanism for fundamental physical change.
For mutations to be acceptable as the engine of Darwinian change, they have
to be able to occur in isolation and then, as stated earlier, pass themselves
intact to succeeding generations. By definition that means they have to be able
to regulate their own functions, both to express and to methylate their genetic
processes. Whenever a trait mutates, whether a longer limb, a stronger muscle,
or a more efficient organ, it should pass into the gene pool whole and complete,
not half of it being expressed from the male side of a pairing and half from the
female side. Why? Because both parents would have to mutate in complementary
ways at the same time to the same degree...and then they would have to find each
other and mate in order to have even a chance to pass the mutation on!
Natural mutations, while statistically rare, are clearly documented. They can
be neutral, negative, or positive. So when geneticists contend that isolated
mutations in DNA can occur and be passed on to succeeding generations, they
first assume the individual with the mutation has been fortunate enough to have
the correct one out of the three possibilities. They further assume the
individual survives the brutal winnowing process Darwin so correctly labeled
“survival of the fittest.” But fittest or not, any fledgling animal or plant
must contend with an infinite number of ways to miss the boat to maturity.
Assuming that passage is safe, the lucky individual with the positive mutation
has to get lucky several more times to produce enough offspring so that at least
a few of them possess his or her positive mutation and also survive to maturity
to pass it along. It is a series of events that, taken altogether, are extremely
unlikely but at least they are feasible, and they do, in fact, happen.
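The chain of contingencies just described can be made concrete with a back-of-the-envelope calculation. All of the probabilities below are invented for illustration; real values vary enormously by species and trait.

```python
# Hypothetical probabilities for each hurdle a new mutation must clear.
# Every number here is an assumption chosen for illustration only.
p_positive   = 1 / 3      # the mutation happens to be the beneficial kind
p_to_adult   = 0.10       # the carrier survives to reproductive maturity
p_reproduces = 0.50       # the carrier finds a mate and breeds
p_inherited  = 0.50       # a given offspring receives the mutant allele
n_offspring  = 8          # brood size

# Chance that at least one offspring inherits the mutation:
p_at_least_one = 1 - (1 - p_inherited) ** n_offspring

# The whole chain must succeed for the mutation to reach the next generation.
p_chain = p_positive * p_to_adult * p_reproduces * p_at_least_one
print(f"{p_chain:.4f}")   # small, but not zero: unlikely yet feasible
```

Multiplying the steps out shows the point the paragraph makes: each hurdle alone is survivable, and the compounded chance, while low, is nonzero, which is why such events can and do happen.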
Imprinted genes, however, neatly sever those threads of feasibility by making
it literally impossible for any mutation, positive or otherwise, to affect more
than the individual expressing it. There is certainly no way for it to work its
way into a gene pool regulated by imprinted genes. Why? For the reasons just
stated above: for a mutation to be implemented, it must be beneficial and it
must be paired with a similar change in a member of the opposite sex. Thus, if
only a handful of genes are capable of being turned on and off by different
parents, then Darwinian evolution has no place in the grand scheme of life on
Earth. Imprinting shoves Darwinists well beyond any hope of feasibility, to a
region of DNA where change is incapable of being positive.
Timing Really Is Everything
What we are really talking about with imprinting processes is timing, the
most exquisite and incomprehensible faculty any gene possesses. By knowing
when--and being able--to turn on and off the millions to billions of biological
processes that create and sustain living organisms, genes control the switches
that control life itself. In effect, whatever controls the timing switches
controls the organism. If, for example, only one methyl group misses its
turn-off signal on an expressing gene, the resultant non-stop expressing will
lead to cellular overproduction and, ultimately, cancer. Conversely, if only one
gene fails to express when it should, at the very least a seriously negative
event has occurred, and at worst the organism has suffered a catastrophe that
will terminate its life.
More important than this, however, is that timing sequences cannot be altered
in any way, shape, or form that will not be detrimental to offspring. In other
words, the “evolution” of a timing sequence in the development of an embryo or a
growing offspring simply cannot be favorable in the Darwinian sense. Why?
Because in terms of results it is already perfect. And how do we know it is
perfect? Because the parents both reached maturity. What is so special about
their reaching maturity? It means their own timing sequences performed perfectly
in their own embryos, with their initial sperm and egg differentiating in
millions of ways to become their bodies. (In plants the same principle holds
true). Then their growing period developed perfectly, with its millions of
different timing events leading to their limbs and organs growing to their
proper sizes and carrying on their proper functions.
Any alteration of that perfection can be, and nearly always is, devastating.
In golf a putt drops or it doesn’t. In timing sequences, they are started and
stopped precisely, or not. There is no room for error or improvement (no third
condition called “better”). Thus, no genetic alteration to timing can create the
faster legs, larger horns, sharper teeth, etc., called for by Darwin’s theory of
piecemeal change. This is why gills cannot become lungs, why fins cannot become
limbs, why scales cannot become fur or skin. No single timing mechanism can
“evolve” without altering the perfection that has been passed to offspring by
parents through untold generations.
A good analogy is the building of a house. We start with a blueprint.
Analogize this with the genetic blueprint provided by DNA. The former outlines
the physical materials that go into a house: wood, nails, sheetrock, doors, etc.
The latter outlines the physical materials that go into creating a body: blood,
bones, skin, hair, etc. Next, we bring in the carpenters who will build the
house. It is they who, following our carefully drawn blueprint, will determine
everything that will be done to create our house. More importantly, they will
determine when all parts of the house will be built, when any particular process
will start and when it will stop. They will build the floor before the walls,
the walls before the roof, etc.
Building our house is thus a two-part project: what to build, and how and
when to build it. It is the same with living organisms, whose carpenter genes
(the mysterious timing mechanisms that turn growth processes on and off)
determine their success. Now it becomes easy to understand Darwin’s fundamental
error. While examining the widely varied houses of living organisms, he saw no
trace of the invisible carpenters who have the decisive hand in their creation.
Therefore, his theory did not--and so far cannot--account for the fact that
carpenter genes invariably prohibit alterations.
If I Had A Hammer
As with a house, DNA contains or provides everything necessary to create a
particular organism, whether animal or plant. DNA has the further capacity to
define and manufacture the physiological materials needed to create the entirety
of the organism, precisely when they are needed and to the exact degree they are
needed. And, perhaps most wondrous of all, DNA contains the ineffable carpenter
genes that determine when each phase of the organism’s construction will begin
and end. Any organism’s parents will have passed to it a set of DNA blueprints
of what to build and how to build it, which are nearly always perfect with
respect to timing, but allowing slight variations in what is built. On the
occasions when faulty timing does lead to tragedy, the imperfections are due to
sperm-egg misconnects, or molecular anomalies in DNA caused by radiation or
chemical mutagens.
Where classic Darwinian evolution completely breaks down is in not allowing
carpenter genes to exist separately from end results. Darwinism contends that
when any aspect of an organism’s materials change (i.e., a mutation in some
strand of DNA which changes some aspect of physical structure), that organism’s
carpenter genes smoothly accommodate the change (alter the blueprint) by
adjusting the timing sequences (beginning and end) of that structure’s
development. This is not reality. A Watusi’s thighbone takes just as long to
form as a Pygmy’s thighbone (about 18 years), so only the end results--their
respective sizes--have changed, not their timing processes. This is one reason
why all human beings can so easily interbreed, even the unlikely combination of
Watusis and Pygmies. Our vast array of underlying genetic timing mechanisms,
including our imprinted genes, has been handed down intact (unevolved!) since
the beginning of our existence as a species.
Thus, what is built can be slowly, gradually altered; how it is built cannot.
This obvious fact...this undeniable truth...has the most profound implications:
In the carpenter genes of successful organisms, no improvement is possible! And
without improvement, via Darwinian change, how could they have evolved? Not just
into something from nothing, but into millions of interlocking, tightly
sequenced commands that smoothly mesh over extended periods as organisms develop
from embryo to birth to sexual maturity? The short answer is, “They can’t.”
What all this means, of course, is that everything we think we know about how
life develops on Earth is flatly wrong. It means all of our “experts” are
totally mistaken when they tell us that Darwin’s theory of gradual mutations has
led to the development of all species of plants and animals on the planet.
Nothing could be further from the truth. Darwinism cannot work now, it has never
been able to work, and the time has come for its supporters to stop their
intellectual posturing and admit they need to go back to their drawing boards to
seek a more plausible explanation for what is surely life’s greatest single
mystery.
Copyright 2006 by Lloyd Pye.
Presented with permission of the author.