Evolution: Possible or Impossible? - free e-book - by Dr. James F. Coppedge

"LIFE’S CONSERVATION LAW: Why Darwinian Evolution Cannot Create Biological Information": - William Dembski - Robert Marks

William Dembski Is Interviewed By Casey Luskin About Conservation Of Information - Audio

Evolutionary Informatics Lab - Main Publications

God by the Numbers - Charles Edward White - William Dembski
Excerpt: "Even if we limit the number of necessary mutations to 1,000 and argue that half of these mutations are beneficial, the odds against getting 1,000 beneficial mutations in the proper order is 2^1000. Expressed in decimal form, this number is about 10^301. 10^301 mutations is a number far beyond the capacity of the universe to generate. Even if every particle in the universe mutated at the fastest possible rate and had done so since the Big Bang, there still would not be enough mutations."

The Universal Plausibility Metric (UPM) & Principle (UPP) - Abel - Dec. 2009
Excerpt: Mere possibility is not an adequate basis for asserting scientific plausibility. A precisely defined universal bound is needed beyond which the assertion of plausibility, particularly in life-origin models, can be considered operationally falsified. But can something so seemingly relative and subjective as plausibility ever be quantified? Amazingly, the answer is, "Yes." …

cΩu = Universe = 10^13 reactions/sec × 10^17 sec × 10^78 atoms = 10^108

cΩg = Galaxy = 10^13 × 10^17 × 10^66 atoms = 10^96

cΩs = Solar System = 10^13 × 10^17 × 10^55 atoms = 10^85

cΩe = Earth = 10^13 × 10^17 × 10^40 atoms = 10^70
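Because each quantity in Abel's plausibility bounds is a power of ten, the products above reduce to adding exponents. A small sketch reproducing all four figures:

```python
# Abel's universal plausibility bounds: reactions/sec x seconds elapsed x atoms.
# All quantities are powers of ten, so multiplying them just adds exponents.
RATE, TIME = 13, 17  # 10^13 reactions/sec, 10^17 seconds

atoms = {"universe": 78, "galaxy": 66, "solar_system": 55, "earth": 40}
bounds = {scope: RATE + TIME + exp for scope, exp in atoms.items()}
print(bounds)  # {'universe': 108, 'galaxy': 96, 'solar_system': 85, 'earth': 70}
```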


Programming of Life - Probability - Defining Probable, Possible, Feasible etc. - video

Indeed, math is not kind to Darwinism in the least when one considers the probability of humans 'randomly' evolving:

'In Barrow and Tipler's book The Anthropic Cosmological Principle, they list ten steps necessary in the course of human evolution, each of which is so improbable that, if left to happen by chance alone, the sun would have ceased to be a main sequence star and would have incinerated the earth. They estimate that the odds of the evolution (by chance) of the human genome are somewhere between (4^-180)^110,000 and (4^-360)^110,000. Therefore, if evolution did occur, it literally would have been a miracle and evidence for the existence of God.'
William Lane Craig
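The nested exponents in the Barrow and Tipler estimate are too small to hold in any floating-point number, but their base-ten magnitudes can be computed in log space. A sketch converting both bounds to powers of ten:

```python
import math

# Convert (4^-180)^110,000 and (4^-360)^110,000 into powers of ten.
# These underflow any float, so we work entirely with logarithms.
def base10_exponent(power_of_four, steps=110_000):
    """Base-10 exponent of (4^power_of_four)^steps."""
    return power_of_four * steps * math.log10(4)

upper = base10_exponent(-180)  # about -1.19e7, i.e. roughly 10^-11,900,000
lower = base10_exponent(-360)  # about -2.38e7, i.e. roughly 10^-23,800,000
```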

William Lane Craig - If Human Evolution Did Occur It Was A Miracle - video

Along that same line:

Darwin and the Mathematicians - David Berlinski
"The formation within geological time of a human body by the laws of physics (or any other laws of similar nature), starting from a random distribution of elementary particles and the field, is as unlikely as the separation by chance of the atmosphere into its components.”
Kurt Gödel, was a preeminent mathematician who is considered one of the greatest to have ever lived. Of Note: Godel was a Theist!

“Darwin’s theory is easily the dumbest idea ever taken seriously by science.”
Granville Sewell - Professor of Mathematics - University of Texas, El Paso

Waiting Longer for Two Mutations - Michael J. Behe
Excerpt: Citing malaria literature sources (White 2004), I had noted that the de novo appearance of chloroquine resistance in Plasmodium falciparum was an event with a probability of 1 in 10^20. I then wrote that 'for humans to achieve a mutation like this by chance, we would have to wait 100 million times 10 million years' (1 quadrillion years) (Behe 2007), because that is the extrapolated time that it would take to produce 10^20 humans. Durrett and Schmidt (2008, p. 1507) retort that my number 'is 5 million times larger than the calculation we have just given' using their model (which nonetheless gives a prohibitively long waiting time of 216 million years). Their criticism compares apples to oranges. My figure of 10^20 is an empirical statistic from the literature; it is not, as their calculation is, a theoretical estimate from a population genetics model.
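The extrapolation in the excerpt can be sketched with simple division. Note that the average birth rate below (10^5 humans per year) is back-calculated here from Behe's quoted result; the excerpt does not state the rate he actually used, so treat it as an assumption:

```python
# Reproduce the waiting-time extrapolation: how long to produce 10^20 humans?
organisms_needed = 10**20
births_per_year = 10**5  # assumption, inferred only to reproduce the quoted figure
years = organisms_needed // births_per_year
print(years)  # 10^15 years: "100 million times 10 million years", 1 quadrillion
```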

Whale Evolution Vs. Population Genetics - Richard Sternberg PhD. in Evolutionary Biology - video

Can Darwin’s enemy, math, rescue him? - May 2011


Falsification Of Neo-Darwinism by Quantum Entanglement/Information

Doug Axe Knows His Work Better Than Steve Matheson
Excerpt: Regardless of how the trials are performed, the answer ends up being at least half of the total number of password possibilities, which is the staggering figure of 10^77 (written out as 100,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000). Armed with this calculation, you should be very confident in your skepticism, because a 1 in 10^77 chance of success is, for all practical purposes, no chance of success. My experimentally based estimate of the rarity of functional proteins produced that same figure, making these likewise apparently beyond the reach of chance.
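Since the excerpt writes 10^77 out in full, it is worth confirming the decimal expansion; Python's arbitrary-precision integers make this a one-liner:

```python
# Confirm the written-out figure: 10^77 is a 1 followed by 77 zeros.
figure = 10**77
print(f"{figure:,}")           # comma-grouped, matching the excerpt's expansion
assert len(str(figure)) == 78  # 1 leading digit + 77 zeros
```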

Evolution vs. Functional Proteins - Doug Axe - Video

Stephen Meyer - Proteins by Design - Doing The Math - video

Signature in the Cell - Book Review - Ken Peterson
Excerpt: If we assume some minimally complex cell requires 250 different proteins, then the probability of this arrangement happening purely by chance is one in 10 to the 164th multiplied by itself 250 times, or one in 10 to the 41,000th power.
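The review's arithmetic compounds one per-protein probability across 250 proteins; since powers multiply exponents, the result follows directly:

```python
# (10^-164) compounded over 250 independent proteins: exponents multiply.
per_protein_exponent = 164
proteins = 250
total_exponent = per_protein_exponent * proteins
print(total_exponent)  # 41000, matching "one in 10 to the 41,000th power"
```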

Professor Harold Morowitz shows that the Origin of Life 'problem' escalates dramatically beyond the 1 in 10^40,000 figure when working from a thermodynamic perspective:

"The probability for the chance of formation of the smallest, simplest form of living organism known is 1 in 10^340,000,000. This number is 10 to the 340 millionth power! The size of this figure is truly staggering since there is only supposed to be approximately 10^80 (10 to the 80th power) electrons in the whole universe!"
(Professor Harold Morowitz, Energy Flow In Biology pg. 99, Biophysicist of George Mason University)

Dr. Don Johnson lays out some of the probabilities for life in the following video:

Probabilities Of Life - Don Johnson PhD. - 38 minute mark of video
a typical functional protein - 1 part in 10^175
the required enzymes for life - 1 part in 10^40,000
a living self replicating cell - 1 part in 10^340,000,000

Programming of Life - Probability of a Cell Evolving - video
Programming of Life - video playlist:

Book Review - Meyer, Stephen C. Signature in the Cell. New York: HarperCollins, 2009.
Excerpt: As early as the 1960s, those who approached the problem of the origin of life from the standpoint of information theory and combinatorics observed that something was terribly amiss. Even if you grant the most generous assumptions: that every elementary particle in the observable universe is a chemical laboratory randomly splicing amino acids into proteins every Planck time for the entire history of the universe, there is a vanishingly small probability that even a single functionally folded protein of 150 amino acids would have been created. Now of course, elementary particles aren't chemical laboratories, nor does peptide synthesis take place where most of the baryonic mass of the universe resides: in stars or interstellar and intergalactic clouds. If you look at the chemistry, it gets even worse—almost indescribably so: the precursor molecules of many of these macromolecular structures cannot form under the same prebiotic conditions—they must be catalysed by enzymes created only by preexisting living cells, and the reactions required to assemble them into the molecules of biology will only go when mediated by other enzymes, assembled in the cell by precisely specified information in the genome.
So, it comes down to this: Where did that information come from? The simplest known free living organism (although you may quibble about this, given that it's a parasite) has a genome of 582,970 base pairs, or about one megabit (assuming two bits of information for each nucleotide, of which there are four possibilities). Now, if you go back to the universe of elementary particle Planck time chemical labs and work the numbers, you find that in the finite time our universe has existed, you could have produced about 500 bits of structured, functional information by random search. Yet here we have a minimal information string which is (if you understand combinatorics) so indescribably improbable to have originated by chance that adjectives fail.
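The review's "about 500 bits" bound can be checked as an order-of-magnitude sketch: every particle trying one combination per Planck time for the age of the universe. The age and Planck-time values below are standard estimates supplied here, not taken from the review:

```python
import math

# Order-of-magnitude check on the "about 500 bits by random search" bound.
PARTICLES = 10**80        # elementary particles in the observable universe
AGE_SECONDS = 4.35e17     # ~13.8 billion years (assumed standard estimate)
PLANCK_TIME = 5.39e-44    # seconds (assumed standard estimate)

trials = PARTICLES * (AGE_SECONDS / PLANCK_TIME)
bits = math.log2(trials)
print(round(bits))  # on the order of the quoted "about 500 bits"

# The genome figure: 582,970 base pairs at 2 bits per nucleotide.
genome_bits = 582_970 * 2  # ~1.17 million bits, the review's "about one megabit"
```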

Michael Behe - Life Reeks Of Design - 2010 - video

Bacterial Flagellum - A Sheer Wonder Of Intelligent Design - video

Three subsets of sequence complexity and their relevance to biopolymeric information - Abel, Trevors
Excerpt: Shannon information theory measures the relative degrees of RSC and OSC. Shannon information theory cannot measure FSC. FSC is invariably associated with all forms of complex biofunction, including biochemical pathways, cycles, positive and negative feedback regulation, and homeostatic metabolism. The algorithmic programming of FSC, not merely its aperiodicity, accounts for biological organization. No empirical evidence exists of either RSC or OSC ever having produced a single instance of sophisticated biological organization. Organization invariably manifests FSC rather than successive random events (RSC) or low-informational self-ordering phenomena (OSC). …

Testable hypotheses about FSC

What testable empirical hypotheses can we make about FSC that might allow us to identify when FSC exists? In any of the following null hypotheses [137], demonstrating a single exception would allow falsification. We invite assistance in the falsification of any of the following null hypotheses:

Null hypothesis #1
Stochastic ensembles of physical units cannot program algorithmic/cybernetic function.

Null hypothesis #2
Dynamically-ordered sequences of individual physical units (physicality patterned by natural law causation) cannot program algorithmic/cybernetic function.

Null hypothesis #3
Statistically weighted means (e.g., increased availability of certain units in the polymerization environment) giving rise to patterned (compressible) sequences of units cannot program algorithmic/cybernetic function.

Null hypothesis #4
Computationally successful configurable switches cannot be set by chance, necessity, or any combination of the two, even over large periods of time.

We repeat that a single incident of nontrivial algorithmic programming success achieved without selection for fitness at the decision-node programming level would falsify any of these null hypotheses. This renders each of these hypotheses scientifically testable. We offer the prediction that none of these four hypotheses will be falsified.

The Law of Physicodynamic Insufficiency - Dr David L. Abel - November 2010
Excerpt: “If decision-node programming selections are made randomly or by law rather than with purposeful intent, no non-trivial (sophisticated) function will spontaneously arise.” … After ten years of continual republication of the null hypothesis with appeals for falsification, no falsification has been provided. The time has come to extend this null hypothesis into a formal scientific prediction: “No non-trivial algorithmic/computational utility will ever arise from chance and/or necessity alone.”

The Law of Physicodynamic Incompleteness - David L. Abel - August 2011
Summary: “The Law of Physicodynamic Incompleteness” states that inanimate physicodynamics is completely inadequate to generate, or even explain, the mathematical nature of physical interactions (the purely formal laws of physics and chemistry). The Law further states that physicodynamic factors cannot cause formal processes and procedures leading to sophisticated function. Chance and necessity alone cannot steer, program or optimize algorithmic/computational success to provide desired non-trivial utility.

The GS (genetic selection) Principle – David L. Abel – 2009
Excerpt: Stunningly, information has been shown not to increase in the coding regions of DNA with evolution. Mutations do not produce increased information. Mira et al (65) showed that the amount of coding in DNA actually decreases with evolution of bacterial genomes, not increases. This paper parallels Petrov’s papers starting with (66) showing a net DNA loss with Drosophila evolution (67). Konopka (68) found strong evidence against the contention of Subba Rao et al (69, 70) that information increases with mutations. The information content of the coding regions in DNA does not tend to increase with evolution as hypothesized. Konopka also found Shannon complexity not to be a suitable indicator of evolutionary progress over a wide range of evolving genes. Konopka’s work applies Shannon theory to known functional text. Kok et al. (71) also found that information does not increase in DNA with evolution. As with Konopka, this finding is in the context of the change in mere Shannon uncertainty. The latter is a far more forgiving definition of information than that required for prescriptive information (PI) (21, 22, 33, 72). It is all the more significant that mutations do not program increased PI. Prescriptive information either instructs or directly produces formal function. No increase in Shannon or Prescriptive information occurs in duplication. What the above papers show is that not even variation of the duplication produces new information, not even Shannon “information.”

Dr. Don Johnson explains the difference between Shannon Information and Prescriptive Information, as well as 'the cybernetic cut', in the following podcast:

Programming of Life - Dr. Donald Johnson interviewed by Casey Luskin - audio podcast

Programming of Life - Information - Shannon, Functional & Prescriptive - video

Intelligent Design - The Anthropic Hypothesis
