Wintery Knight

…integrating Christian faith and knowledge in the public square

Is the peer-review system objective and reliable?

When I argue, if my case can be supported by peer-reviewed literature, then I always appeal to peer-reviewed literature. But problems with this system keep coming out.

Look at this article from Evolution News.

Excerpt:

Darwinists have had to back off considerably from the once-confident assertion that peer review in science journals constitutes, as Jerry Coyne put it in 2005 in The New Republic, the “gold standard for modern scientific achievement.” The whole institution of peer review is so besmirched now as to arouse, not even amusement anymore, but something more like pity.

In the same article, Coyne maintained that it was precisely by “By that standard” that advocates of the theory of intelligent design “have failed miserably.” You mean by the standard of what is now revealed as the intellectual and scientific equivalent of insider trading? Or more like racketeering and simple fraud.

The existence of a blog like Retraction Watch is, in this respect, a sign of the times, a measure of the extent to which science publishing has fallen into derision. Their post from a couple of days ago, on a “peer review and citation ring at the Journal of Vibration and Control,” has been widely reported, including the retraction of 60 papers from that journal. Sixty!

“This one deserves a ‘wow,'” observes author Ivan Oransky. Indeed. The cat is really out of the bag.

Slate:

It may not be entirely fair to liken a “peer review and citation ring” to the academic version of an extortion ring, but there’s certainly fraud involved in both. Retraction Watch, a blog dedicated to chronicling which academic papers have been withdrawn, is reporting that SAGE Publishing, a group that puts out numerous peer-reviewed journals, is retracting 60 papers from its Journal of Vibration and Control after an internal investigation uncovered extensive evidence of severe peer-review fraud.

Apparently researcher Peter Chen, formerly of National Pingtung University of Education in Taiwan, made multiple submission and reviewer accounts — possibly along with other researchers at his institution or elsewhere — so that he could influence the peer review system. When Chen or someone else from the ring submitted a paper, the group could manipulate who reviewed the research, and on at least one occasion Chen served as his own reviewer.

Previously, I blogged about the problems in the peer-review process, linking to an article from The Economist and to a podcast featuring Tulane University physicist Frank Tipler.

I’m still going to argue peer-reviewed evidence if I have it, but if some is used against me, I’ll have to take a closer look.

Filed under: News

Physicist Frank Tipler on the usefulness of refereed journals, then and now

I really enjoyed this episode of the ID the Future podcast.

Description:

Is the only good science peer-reviewed science? Are there other avenues to present important scientific work? On this episode of ID The Future, Professor of Mathematics Dr. Frank Tipler discusses the pros and cons of peer review and refereed journals. More than fifty peer-reviewed papers discussing intelligent design have been published, but critics of the theory still proclaim a lack of peer-reviewed work as an argument. Listen in as Tipler shows how things have changed with the peer review process and what we can do about it.

About the speaker:

Frank Tipler was born and raised in Andalusia, Alabama. His first science project was a letter written in kindergarten to Wernher von Braun, whose plans to launch the first earth satellite were then being publicized. Von Braun’s secretary replied, regretting he had no rocket fuel for Tipler as requested. By age five, he knew he wanted to be an astrophysicist. But he’s always been a polymath, reading widely across disciplines and into the history of science and theology. After graduating from MIT and the University of Maryland, he did postdoctoral work at Oxford and Berkeley, before arriving at Tulane in 1981.

William Lane Craig often cites a book by two physicists named “Barrow and Tipler,” called “The Anthropic Cosmological Principle” (Oxford University Press, 1988), in his debates to support the fine-tuning argument. This Tipler is that Tipler! Dr. Tipler is a master of the physics of cosmology and fine-tuning. However, I definitely disagree with him on some of his ideas.

The MP3 file is here. (17 minutes)

Topics:

  • the changing nature of refereed journals and peer-review
  • previously, the refereed journals were more about communication
  • now, ideas are not taken seriously unless they are published in these journals
  • the problem is that referees can be motivated by ideological concerns
  • before, an obscure patent official named Einstein submitted a physics paper and it was published
  • now, an uncredentialed person would not be able to get a brilliant paper published like that
  • today, there are so many scientists that many more papers are submitted
  • although it restricts BAD ideas, it can also end up censoring NEW ideas
  • the problem is that any really brilliant idea has to go against the prevailing consensus
  • peer-review may actually be holding back the progress of science by censoring NEW ideas
  • some referees are motivated to censor ideas that undercut their reputation and prestige
  • Dr. Tipler was told to remove references to intelligent design before one of his papers would be published
  • how scientists with NEW ideas can bypass the system of refereed journals when they are censored
  • peer-review has value when it finds errors, but not when it suppresses new ideas

I think this one is a must-listen. As much as I like peer-reviewed research, it’s important to acknowledge the limitations. I think if you’re going into a debate, you definitely want to be the one with the peer-reviewed evidence. Let the other guy be the one making assertions and stating his preferences and opinions. But that doesn’t mean that the peer-review process can’t be improved; I think it can.

Here is a listing of some recent peer-reviewed publications related to intelligent design.

Filed under: Podcasts

Walter Bradley: three scientific evidences that point to a designed universe

Dr. Walter L. Bradley

Dr. Walter L. Bradley (C.V. here) is the Distinguished Professor of Engineering at Baylor.

Here’s a bio:

Walter Bradley (B.S., Ph.D. University of Texas at Austin) is Distinguished Professor of Engineering at Baylor. He comes to Baylor from Texas A&M University where he helped develop a nationally recognized program in polymeric composite materials. At Texas A&M, he served as director of the Polymer Technology Center for 10 years and as Department Head of Mechanical Engineering, a department of 67 professors that was ranked as high as 12th nationally during his tenure. Bradley has authored over 150 refereed research publications including book chapters, articles in archival journals such as the Journal of Material Science, Journal of Reinforced Plastics and Composites, Mechanics of Time-Dependent Materials, Journal of Composites Technology and Research, Composite Science and Technology, Journal of Metals, Polymer Engineering and Science, and Journal of Materials Science, and refereed conference proceedings.

Dr. Bradley has secured over $5.0 million in research funding from NSF grants (15 yrs.), AFOSR (10 years), NASA grants (10 years), and DOE (3 years). He has also received research grants or contracts from many Fortune 500 companies, including Alcoa, Dow Chemical, DuPont, 3M, Shell, Exxon, Boeing, and Phillips.

He co-authored The Mystery of Life’s Origin: Reassessing Current Theories and has written 10 book chapters dealing with various faith and science issues, a topic on which he speaks widely.

He has received 5 research awards at Texas A&M University and 1 national research award. He has also received two teaching awards. He is an Elected Fellow of the American Society for Materials and the American Scientific Affiliation (ASA), the largest organization of Christians in Science and Technology in the world. He is President elect of the ASA and will serve his term in 2008.

You can read more about his recent research in this article from Science Daily.

Below, I analyze a lecture entitled “Is There Scientific Evidence for an Intelligent Designer?”. Dr. Bradley explains how the progress of science has made the idea of a Creator and Designer of the universe more acceptable than ever before.

The MP3 file is here.

Evidence #1: The design of the universe

1. The correspondence of natural phenomena to mathematical law

  • All observations of physical phenomena in the universe, such as throwing a ball up in the air, are described by a few simple, elegant mathematical equations.

2. The fine-tuning of physical constants and ratios between constants in order to provide a life-permitting universe

  • Life has certain minimal requirements: a long-term stable source of energy, a large number of different chemical elements, an element that can serve as a hub for joining together other elements into compounds, etc.
  • In order to meet these minimal requirements, the physical constants (such as the gravitational constant) and the ratios between physical constants need to be within a narrow range of values in order to support life of any kind.
  • Slight changes to any of the physical constants, or to the ratios between the constants, will result in a universe inhospitable to life.
  • The range of possible values spans over 70 orders of magnitude.
  • Although each individual selection of constants and ratios is as unlikely as any other selection, the vast majority of these possibilities do not support the minimal requirements of life of any kind. (In the same way as any hand of 5 cards that is dealt is as likely as any other, but you are overwhelmingly likely NOT to get a royal flush. In our case, a royal flush is a life-permitting universe).
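The card analogy above can be made concrete with a quick calculation. This is a sketch in Python; the numbers are standard card-game combinatorics, not figures from the lecture:

```python
from math import comb

# Every 5-card hand is equally likely, but almost none of them are royal flushes.
total_hands = comb(52, 5)      # 2,598,960 possible 5-card hands
royal_flushes = 4              # one per suit
p_royal = royal_flushes / total_hands
print(f"P(royal flush) = {p_royal:.2e}")  # roughly 1 in 650,000
```

The point of the analogy: a "life-permitting" selection of constants is like the royal flush, one specific outcome among an astronomically larger set of equally likely but dead-universe outcomes.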

Examples of finely-tuned constants and ratios: (there are more examples in the lecture)

a) The strong force: the force that binds nucleons (= protons and neutrons) together in the nucleus, by means of meson exchange

  • if the strong force constant were 2% stronger, there would be no stable hydrogen, no long-lived stars, no hydrogen containing compounds. This is because the single proton in hydrogen would want to stick to something else so badly that there would be no hydrogen left!
  • if the strong force constant were 5% weaker, there would be no stable stars, few (if any) elements besides hydrogen. This is because you would NOT be able to build up the nuclei of the heavier elements, which contain more than 1 proton.
  • So, whether you adjust the strong force up or down, you lose either stars that can serve as long-term sources of stable energy, or the chemical diversity necessary to meet the minimal requirements of living beings. (see below)

b) The conversion of beryllium to carbon, and carbon to oxygen

  • Life requires carbon in order to serve as the hub for complex molecules, but it also requires oxygen in order to create water.
  • Carbon is like the hub wheel in a tinker toy set: you can bind other elements together into more complicated molecules (e.g., “carbon-based life”), but the bonds are not so tight that they can’t be broken down again later to make something else.
  • The carbon resonance level is determined by two constants: the strong force and electromagnetic force.
  • If you mess with these forces even slightly, you either lose the carbon or the oxygen.

3. Fine-tuning to allow a habitable planet

  • A number of factors must be fine-tuned in order to have a planet that supports life
  • Initial estimates predicted abundant life in the universe, but revised estimates now predict that life is almost certainly unique in the galaxy, and probably unique in the universe.
  • Even though there are lots of stars in the universe, the odds are against any of them supporting complex life.
  • Here are just a few of the minimal requirements for habitability: it must be a single-star solar system, in order to support stable planetary orbits; the planet must be the right distance from the sun in order to have liquid water at the surface; the planet must have sufficient mass in order to retain an atmosphere; etc.

The best non-theistic response to this argument is to postulate a multiverse, but that is very speculative and there is no experimental evidence that supports it.

Evidence #2: The origin of the universe

1. The progress of science has shown that the entire physical universe came into being out of nothing (= “the big bang”). It also shows that the cause of this creation event is non-physical and non-temporal. The cause is supernatural.

  • Atheism prefers an eternal universe, to get around the problem of a Creator having to create the universe.
  • Discovery #1: Observations of galaxies moving away from one another confirm that the universe expanded from a single point.
  • Discovery #2: Measurements of the cosmic background radiation confirm that the universe exploded into being.
  • Discovery #3: Confirmed predictions of light element abundances show that the universe is not eternal.
  • Discovery #4: The atheism-friendly steady-state model and oscillating model were both falsified by the evidence.
  • And there were other discoveries as well, mentioned in the lecture.

The best non-theistic response to this argument is to postulate a hyper-universe outside of ours, but that is very speculative and there is no experimental evidence that supports it.

Evidence #3: The origin of life

1. The progress of science has shown that the simplest living organism contains huge amounts of biological information, similar to the Java code I write all day at work. This is a problem for atheists, because the sequence of instructions in a living system has to come together all at once; it cannot have evolved by mutation and selection, because there was no replication in place prior to the formation of that first living system!

  • Living systems must support certain minimum life functions: processing energy, storing information, and replicating.
  • There needs to be a certain amount of complexity in the living system that can perform these minimum functions.
  • But on atheism, the living system needs to be simple enough to form by accident in a pre-biotic soup, and in a reasonable amount of time.
  • The minimal functionality in a living system is achieved by DNA, RNA and enzymes. DNA and RNA are composed of sequences of nucleotides; enzymes are proteins, which are composed of sequences of amino acids.

Consider the problems of building a chain of 100 amino acids

  • The amino acids must be left-handed only, but left and right kinds are equally abundant in nature. How do you sort out the right-handed ones?
  • The amino acids must be bound together using peptide bonds. How do you prevent other types of bonds?
  • Each link of the amino acid chain needs to be carefully chosen so that the completed chain will fold up into a protein. How do you choose the correct amino acid for each link from the pool of 20 different kinds found in living systems?
  • In every case, a human or other intelligence could solve these problems by doing what intelligent agents do best: making choices.
  • But who is there to make the choices on atheism?
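A back-of-the-envelope version of this argument can be sketched numerically. The figures below (a 1-in-2 chance per residue of correct chirality, a 1-in-2 chance per link of forming a peptide bond, and a 1-in-20 chance per position of the right amino acid) are common illustrative assumptions in origin-of-life discussions, not numbers from Bradley's lecture, and they treat every event as independent:

```python
from math import log10

N = 100  # chain length in amino acids
# Work in log space, since the probability underflows an ordinary float.
log_p = (
    N * log10(1 / 2)          # each residue left-handed (assumed 50/50)
    + (N - 1) * log10(1 / 2)  # each of the 99 links a peptide bond (assumed 50/50)
    + N * log10(1 / 20)       # the right residue at each position (20 kinds)
)
print(f"P(one specific functional chain) ~ 10^{log_p:.0f}")  # about 10^-190
```

Under these toy assumptions the chance is on the order of 1 in 10^190; the real chemistry is more complicated than independent coin flips, but this is the shape of the argument.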

The best current non-theistic response to this is to speculate that aliens may have seeded the Earth with life at some point in the past.

The problem of the origin of life is not a problem of chemistry; it is a problem of engineering. Every part of a car’s functionality can be understood and described using the laws of physics and chemistry. But an intelligence is still needed in order to assemble the components into a system that has the minimal requirements for a functioning vehicle.

Filed under: Podcasts

The Economist: some problems with the peer-review process

From The Economist, of all places.

Excerpt:

The idea that the same experiments always get the same results, no matter who performs them, is one of the cornerstones of science’s claim to objective truth. If a systematic campaign of replication does not lead to the same results, then either the original research is flawed (as the replicators claim) or the replications are (as many of the original researchers on priming contend). Either way, something is awry.

It is tempting to see the priming fracas as an isolated case in an area of science—psychology—easily marginalised as soft and wayward. But irreproducibility is much more widespread. A few years ago scientists at Amgen, an American drug company, tried to replicate 53 studies that they considered landmarks in the basic science of cancer, often co-operating closely with the original researchers to ensure that their experimental technique matched the one used first time round. According to a piece they wrote last year in Nature, a leading scientific journal, they were able to reproduce the original results in just six. Months earlier Florian Prinz and his colleagues at Bayer HealthCare, a German pharmaceutical giant, reported in Nature Reviews Drug Discovery, a sister journal, that they had successfully reproduced the published results in just a quarter of 67 seminal studies.

Let’s take a look at some of the problems from the article.

Problems with researcher bias:

Other data-heavy disciplines face similar challenges. Models which can be “tuned” in many different ways give researchers more scope to perceive a pattern where none exists. According to some estimates, three-quarters of published scientific papers in the field of machine learning are bunk because of this “overfitting”, says Sandy Pentland, a computer scientist at the Massachusetts Institute of Technology.

Problems with journal referees:

Another experiment at the BMJ showed that reviewers did no better when more clearly instructed on the problems they might encounter. They also seem to get worse with experience. Charles McCulloch and Michael Callaham, of the University of California, San Francisco, looked at how 1,500 referees were rated by editors at leading journals over a 14-year period and found that 92% showed a slow but steady drop in their scores.

As well as not spotting things they ought to spot, there is a lot that peer reviewers do not even try to check. They do not typically re-analyse the data presented from scratch, contenting themselves with a sense that the authors’ analysis is properly conceived. And they cannot be expected to spot deliberate falsifications if they are carried out with a modicum of subtlety.

Problems with fraud:

Fraud is very likely second to incompetence in generating erroneous results, though it is hard to tell for certain. Dr Fanelli has looked at 21 different surveys of academics (mostly in the biomedical sciences but also in civil engineering, chemistry and economics) carried out between 1987 and 2008. Only 2% of respondents admitted falsifying or fabricating data, but 28% of respondents claimed to know of colleagues who engaged in questionable research practices.

Problems releasing data:

Reproducing research done by others often requires access to their original methods and data. A study published last month in PeerJ by Melissa Haendel, of the Oregon Health and Science University, and colleagues found that more than half of 238 biomedical papers published in 84 journals failed to identify all the resources (such as chemical reagents) necessary to reproduce the results. On data, Christine Laine, the editor of the Annals of Internal Medicine, told the peer-review congress in Chicago that five years ago about 60% of researchers said they would share their raw data if asked; now just 45% do. Journals’ growing insistence that at least some raw data be made available seems to count for little: a recent review by Dr Ioannidis showed that only 143 of 351 randomly selected papers published in the world’s 50 leading journals and covered by some data-sharing policy actually complied.

Critics of global warming have had problems getting at data before, as Nature reported here:

Since 2002, McIntyre has repeatedly asked Phil Jones, director of CRU, for access to the HadCRU data. Although the data are made available in a processed gridded format that shows the global temperature trend, the raw station data are currently restricted to academics. While Jones has made data available to some academics, he has refused to supply McIntyre with the data. Between 24 July and 29 July of this year, CRU received 58 freedom of information act requests from McIntyre and people affiliated with Climate Audit. In the past month, the UK Met Office, which receives a cleaned-up version of the raw data from CRU, has received ten requests of its own.

Why would scientists hide their data? Well, recall that the Climategate scandal resulted from unauthorized release of the code used to generate the data used to promote global warming alarmism. The leaked code showed that the scientists had been generating faked data using a “fudge factor”.

Elsewhere, leaked e-mails from global warmists revealed that they do indeed suppress articles that are critical of global warming alarmism:

As noted previously, the Climategate letters and documents show Jones and the Team using the peer review process to prevent publication of adverse papers, while giving softball reviews to friends and associates in situations fraught with conflict of interest. Today I’ll report on the spectacle of Jones reviewing a submission by Mann et al.

Let’s recall some of the reviews of articles daring to criticize CRU or dendro:

I am really sorry but I have to nag about that review – Confidentially I now need a hard and if required extensive case for rejecting (Briffa to Cook)

If published as is, this paper could really do some damage. It is also an ugly paper to review because it is rather mathematical, with a lot of Box-Jenkins stuff in it. It won’t be easy to dismiss out of hand as the math appears to be correct theoretically, (Cook to Briffa)

Recently rejected two papers (one for JGR and for GRL) from people saying CRU has it wrong over Siberia. Went to town in both reviews, hopefully successfully. (Jones to Mann)

One last quote from the Economist article. One researcher submitted a completely bogus paper to many journals, and many of them accepted it:

John Bohannon, a biologist at Harvard, recently submitted a pseudonymous paper on the effects of a chemical derived from lichen on cancer cells to 304 journals describing themselves as using peer review. An unusual move; but it was an unusual paper, concocted wholesale and stuffed with clangers in study design, analysis and interpretation of results. Receiving this dog’s dinner from a fictitious researcher at a made up university, 157 of the journals accepted it for publication.

Dr Bohannon’s sting was directed at the lower tier of academic journals. But in a classic 1998 study Fiona Godlee, editor of the prestigious British Medical Journal, sent an article containing eight deliberate mistakes in study design, analysis and interpretation to more than 200 of the BMJ’s regular reviewers. Not one picked out all the mistakes. On average, they reported fewer than two; some did not spot any.

The Economist article did not go into the problem of bias due to worldview presuppositions, though. So let me say something about that.

A while back Casey Luskin posted a list of problems with peer review.

Here was one that stuck out to me:

Point 5: The peer-review system is often biased against non-majority viewpoints.
The peer-review system is largely devoted to maintaining the status quo. As a new scientific theory that challenges much conventional wisdom, intelligent design faces political opposition that has nothing to do with the evidence. In one case, pro-ID biochemist Michael Behe submitted an article for publication in a scientific journal but was told it could not be published because “your unorthodox theory would have to displace something that would be extending the current paradigm.” Denyse O’Leary puts it this way: “The overwhelming flaw in the traditional peer review system is that it listed so heavily toward consensus that it showed little tolerance for genuinely new findings and interpretations.”

Recently, I summarized a podcast on the reviewer bias problem featuring physicist Frank Tipler. His concern in that podcast was that peer review would suppress new ideas, even if they were correct. He gave examples of this happening. Even a paper by Albert Einstein was rejected by a peer-reviewed journal. Elsewhere, Tipler was explicitly told to remove positive references to intelligent design in order to get his papers published. Tipler’s advice was for people with new ideas to bypass the peer-reviewed journal system entirely.

Speaking about the need to bypass peer-review, you might remember that the Darwinian hierarchy is not afraid to have people sanctioned if they criticize Darwinism in peer-reviewed literature.

Recall the case of Richard Sternberg.

Excerpt:

In 2004, in my capacity as editor of The Proceedings of the Biological Society of Washington, I authorized “The Origin of Biological Information and the Higher Taxonomic Categories” by Dr. Stephen Meyer to be published in the journal after passing peer-review. Because Dr. Meyer’s article presented scientific evidence for intelligent design in biology, I faced retaliation, defamation, harassment, and a hostile work environment at the Smithsonian’s National Museum of Natural History that was designed to force me out as a Research Associate there. These actions were taken by federal government employees acting in concert with an outside advocacy group, the National Center for Science Education. Efforts were also made to get me fired from my job as a staff scientist at the National Center for Biotechnology Information.

So those are some of the issues to consider when thinking about the peer-review process. My view is that peer-reviewed evidence does count for something in a debate situation, but as you can see from the Economist article, it may not count for as much as it used to. I think my view of science in general has been harmed by what I saw from physicist Lawrence Krauss in his third debate with William Lane Craig. If a scientist can misrepresent another scientist and not get fired by his employer, then I think we really need to be careful about the level of honesty in the academy.

Filed under: Commentary

30% of gorilla genome contradicts Darwinian prediction of human and ape phylogeny

The latest episode of ID the Future discusses a recent (March 2012) paper about the gorilla genome.

Details:

On this episode of ID The Future, Casey Luskin discusses how the recent complete sequencing of the gorilla genome has challenged conventional thinking about human ancestry and explains what neo-Darwinists are doing to try to minimize the impact of this new information. Says Luskin: “There is not a clear signal of ancestral relationships that is coming out of the gorilla genome once you add it into the mix.” Tune in to hear about this interesting development!

The MP3 file is here.

Rather than summarize this short 10-minute podcast, I wanted to excerpt a post from Evolution News about it.

Excerpt: (links removed)

A whopping 30% of the gorilla genome — amounting to hundreds of millions of base pairs of gorilla DNA — contradicts the standard supposed evolutionary phylogeny of great apes and humans. That’s the big news revealed last week with the publication of the sequence of the full gorilla genome. But there’s a lot more to this story.

Eugenie Scott once taught us that when some evolutionary scientist claims some discovery “sheds light” on some aspect of evolution, we might suspect that’s evolution-speak for ‘this find really messed up our evolutionary theory.’ That seems to be the case here. Aylwyn Scally, the lead author of the gorilla genome report, was quoted saying, “The gorilla genome is important because it sheds light on the time when our ancestors diverged from our closest evolutionary cousins around six to 10 million years ago.” NPR titled its story similarly: “Gorilla Genome Sheds Light On Human Evolution.” What evolutionary hypothesis did the gorilla genome mess up?

The standard evolutionary phylogeny of primates holds that humans and chimps are more closely related to one-another than to other great apes like gorillas. In practice, all that really means is that when we sequence human, chimp, and gorilla genes, human and chimp genes have a DNA sequence that is more similar to one-another’s genes than to the gorilla’s genes. But huge portions of the gorilla genome contradict that nice, neat tidy phylogeny. That’s because these gorilla genes are more similar to the human or chimp version than the human or chimp versions are to one-another. In fact, it seems that some 30% of the gorilla genome contradicts the standard primate phylogeny in this manner.
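Mechanically, the 30% figure comes from a per-locus tally: for each region of the genome, ask which pair of species is most similar there, and count how often the answer disagrees with the human-chimp pairing. Here is a toy sketch of that tally with made-up per-locus labels, not real genomic data:

```python
# Hypothetical nearest-pair call for each of 10 loci:
# "HC" = human-chimp most similar, "HG" = human-gorilla, "CG" = chimp-gorilla
loci = ["HC", "HC", "HG", "HC", "CG", "HC", "HC", "HG", "HC", "HC"]

# Count loci whose nearest pair is NOT the expected human-chimp grouping.
discordant = sum(1 for call in loci if call != "HC")
fraction = discordant / len(loci)
print(f"{fraction:.0%} of loci disagree with the standard (human, chimp) grouping")
```

In the real study this tally runs over the whole genome rather than 10 loci, but the accounting is the same: the 30% is the fraction of discordant regions.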

The Evolution News post then cites New Scientist and Nature’s comments on the study.

This isn’t the first time we’ve heard about a study like this – the last time was about the chimpanzee genome and the paper was published in Nature – the most prestigious peer-reviewed journal.

Filed under: News
