

The truth wears off - Comments

Comment 1 by Bernard Hurley

For many scientists, the effect is especially troubling because of what it exposes about the scientific process. If replication is what separates the rigor of science from the squishiness of pseudoscience, where do we put all these rigorously validated findings that can no longer be proved?

This is absurd. I think there are two things going on here.

1) We can all be guilty of confirmation bias. At the beginning of the investigation of any field, it can have the effect of, for instance, making us find methodological faults in studies that don't go the way we expect while being blind to the same faults in studies that do. I am not saying that there is necessarily anything dishonest going on, but we must be aware of possibilities like this.

2) As more experiments are done, these biases gradually get corrected for and we get a better approximation to the truth.

Far from calling the scientific method into question, this illustrates one of its great strengths.
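Here is a minimal toy simulation of point 2 (my own sketch, with an arbitrary "publishable" cutoff, not anything from the article): when early results are reported only if they clear a bar, while later replications are reported regardless, the first published estimates are inflated and the replications "decline" toward the true value, with nothing mysterious going on.

```python
import random
import statistics

random.seed(1)
TRUE_EFFECT = 0.2   # the real underlying effect size
NOISE_SD = 0.5      # measurement noise per study
THRESHOLD = 0.6     # arbitrary bar a result must clear to be "publishable"

def run_study():
    """One noisy estimate of the true effect."""
    return random.gauss(TRUE_EFFECT, NOISE_SD)

# Early phase: only results above the threshold get written up.
early_published = []
while len(early_published) < 20:
    result = run_study()
    if result > THRESHOLD:
        early_published.append(result)

# Later phase: replications are reported whatever they show.
replications = [run_study() for _ in range(1000)]

print(f"true effect:             {TRUE_EFFECT}")
print(f"mean of early published: {statistics.mean(early_published):.2f}")
print(f"mean of replications:    {statistics.mean(replications):.2f}")
```

The early mean comes out far above 0.2 while the replication mean sits near it: the decline effect reproduced from selection bias alone.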

Thu, 06 Jan 2011 23:53:39 UTC | #574313

Comment 2 by Starcrash

What's the saying... "you can't study an environment without changing it"? Something like that. That's my take on the cause of this effect.

A very weird phenomenon, but it's good that someone's calling an area of science into question. You can't improve upon a method unless you spot its weaknesses first.

Fri, 07 Jan 2011 00:03:27 UTC | #574316

Comment 3 by Neodarwinian

Overstated?

Too much peripheral stuff here. Light still bends around stars, allele frequencies change over time in a population of organisms and ionic bonding is still happening.

Fri, 07 Jan 2011 00:08:44 UTC | #574318

Comment 4 by Alan4discussion

As a result, second-generation antipsychotics had become one of the fastest-growing and most profitable pharmaceutical classes. By 2001, Eli Lilly’s Zyprexa was generating more revenue than Prozac. It remains the company’s top-selling drug.

Interesting effects in non-exact sciences have often been observed where large sums of money are involved. On numerous occasions the financial effects have been found to have more sway than scientific studies. I can't comment on this specific issue.

Fri, 07 Jan 2011 00:11:47 UTC | #574322

Comment 5 by Starcrash

Interesting effects in non-exact sciences have often been observed where large sums of money are involved. On numerous occasions the financial effects have been found to have more sway than scientific studies. I can't comment on this specific issue.

That's what the author above found, too. There's a lot of motivation for scientists to see what isn't there in their experiments, and when the motivation wears off then so do the results. It's a good reason for double-blind experiments where possible.

I think Michael Crichton was pretty upset about this in his final years. In "State of Fear" his premise was that scientists believed false things (or drew the wrong conclusions) based on the general consensus and depending on who hired them to test. While his book may not have helped the field of global warming, his premise was a strong one - if someone pays you to find a certain result, you're more likely to find it.

Fri, 07 Jan 2011 00:19:16 UTC | #574325

Comment 6 by biorays

This article smacks of all kinds of human experiential and existential metaphor: sensory satisfaction promotes desire till applied to an overload continuum; desirable things posit consumerism till supplanted or exhausted; pain avoidance is escapism or the path of least resistance till the path becomes arduous or tedious; religion's reversal of the aforementioned postpones illusory satisfactions till death, so potentially extrapolates avoiding the negative consequences of overfamiliarity with existential experiences (carrot riding a donkey).

The old 'Familiarity breeds contempt'.

The science of the sensory consciousness!

Fri, 07 Jan 2011 00:23:56 UTC | #574327

Comment 7 by aquilacane

the data presented at the Brussels meeting made it clear that something strange was happening

Science is wonderful; I love how it never takes yes for an answer.

Fri, 07 Jan 2011 03:44:13 UTC | #574387

Comment 8 by SGP

Disappointing to read through to the concluding remark I hoped I wouldn't find:

"Just because an idea is true doesn’t mean it can be proved. And just because an idea can be proved doesn’t mean it’s true. When the experiments are done, we still have to choose what to believe".

The effects of 'human expectation' on the interpretation of data discussed here are interesting, but they don't warrant such hyperbole.

[sigh]

Fri, 07 Jan 2011 04:01:43 UTC | #574392

Comment 9 by Michael Gray

...claims that have been enshrined in textbooks are suddenly unprovable. This phenomenon doesn’t yet have an official name

Yes it does!
The official name is Bullshit.

Fri, 07 Jan 2011 05:41:18 UTC | #574412

Comment 10 by Steve Zara

That's what the author above found, too. There's a lot of motivation for scientists to see what isn't there in their experiments, and when the motivation wears off then so do the results. It's a good reason for double-blind experiments where possible.

That's a serious accusation. Scientists' careers depend on publications - on getting results. The quality of work is assessed by peer review, and of course scientists know about things like double-blind experiments.

I think Michael Crichton was pretty upset about this in his final years. In "State of Fear" his premise was that scientists believed false things (or drew the wrong conclusions) based on the general consensus and depending on who hired them to test. While his book may not have helped the field of global warming, his premise was a strong one - if someone pays you to find a certain result, you're more likely to find it.

Crichton deserves no praise, and his books got distinctly sillier with time. I remember finding "Prey" so bad that I read it as a parody, and laughed a lot.

There is no point paying scientists to find a certain result, because that's not what science does. It's equivalent to hiring a detective - you don't pay to back a position, you pay to find out the truth.

It's an utter waste of time and money to pay scientists to find a certain result. You can just make it up with a bit of work on a spreadsheet.

Scientists also don't get very far in general research if they publish "me too!" work. The point of their work is to find out something new about a situation.

Fri, 07 Jan 2011 05:53:25 UTC | #574414

Comment 11 by cixelsyd5

"Just because an idea is true doesn’t mean it can be proved. And just because an idea can be proved doesn’t mean it’s true. When the experiments are done, we still have to choose what to believe".

This seems to be similar to an argument that I was told appears in the newest Christian attempt at a defense of God, the book The Reason for God by Timothy Keller. The argument says that "In the last part of Dawkins' TGD he admits that since we are the product of natural selection, we can't completely trust our own senses. After all, evolution is interested only in preserving adaptive behavior, not true belief." He goes on to ask: if we can't trust our belief-forming faculties to tell us the truth, how can we trust them to tell us the truth about anything, including evolutionary science?

He then quotes Darwin as saying, "The horrid doubt always arises whether the convictions of man's mind, which has been developed from the mind of the lower animals, are of any value or at all trustworthy."

This argument sends my bullshit-o-meter screaming but I am not sure how to respond to it...

Fri, 07 Jan 2011 06:22:08 UTC | #574424

Comment 12 by Rational_G

This article is complete rubbish. Much ado about nothing I'm afraid. Mr. Lehrer has made a career of stirring up non-issues and jumping to "alarming conclusions" when there are none.

Fri, 07 Jan 2011 06:24:54 UTC | #574426

Comment 13 by Peter Grant

No one ever said science is easy. I'd be more concerned if there wasn't a decline effect in at least some cases.

When other labs repeated Schooler’s experiments, they got a similar spread of data, with a distinct downward trend. “This was profoundly frustrating,” he says. “It was as if nature gave me this great result and then tried to take it back.” In private, Schooler began referring to the problem as “cosmic habituation,” by analogy to the decrease in response that occurs when individuals habituate to particular stimuli. “Habituation is why you don’t notice the stuff that’s always there,” Schooler says. “It’s an inevitable process of adjustment, a ratcheting down of excitement. I started joking that it was like the cosmos was habituating to my ideas. I took it very personally.”

Schooler sounds rather nutty.

Fri, 07 Jan 2011 06:33:04 UTC | #574429

Comment 14 by Richard Dawkins

This article has grandiloquent ideas and gets above itself. It is a grab bag of miscellaneous worries that any scientist who relies heavily on statistics should be aware of, especially in fields such as psychology where there are numerous complications. For this reason, it is worth reading, and I am glad, on balance, that we have posted it here. In particular, I think it is an idea worth considering that, at least in sensitive fields, we might initiate a new set of rules whereby scientists deposit, IN ADVANCE, their intention to run a particular experiment, with a particular number of trials etc, for public scrutiny BEFORE doing the experiment.

But what this article does NOT demonstrate is any kind of general, cosmic, Sheldrakian law of nature, such that the results of later repeats of an experiment are systematically, and mysteriously, different from earlier runs of the same experiment.

Richard

Fri, 07 Jan 2011 07:38:32 UTC | #574435

Comment 15 by sbooder

At first I thought, hmm, very interesting, but partway through you got a feeling of where it was all going, and at the end it went there: maybe all science is wrong!

Sorry, but didn't this journalist display, by the end of the piece, the very publication bias he was in part writing about?

He also gave very detailed info on the decline-effect experiments, but when it came to the gravity bit towards the end, all we got was that someone made some boreholes and gravity might be wrong… what? What were the results there, please?

Fri, 07 Jan 2011 07:43:11 UTC | #574436

Comment 16 by Peter Grant

I do like this idea though:

In a forthcoming paper, Schooler recommends the establishment of an open-source database, in which researchers are required to outline their planned investigations and document all their results. “I think this would provide a huge increase in access to scientific work and give us a much better way to judge the quality of an experiment,” Schooler says. “It would help us finally deal with all these issues that the decline effect is exposing.”
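As a purely hypothetical sketch of what one entry in such an open database might record (the field names are my invention, not Schooler's proposal): the design is deposited before data collection, and the results slot is filled in afterwards, whether or not the outcome is interesting.

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class RegisteredStudy:
    title: str
    hypothesis: str
    planned_sample_size: int
    planned_analysis: str                  # e.g. "two-sided t-test, alpha = 0.05"
    date_registered: date
    results_summary: Optional[str] = None  # added after the study runs
    raw_data_url: Optional[str] = None     # link to the full data set

# The entry exists, publicly, before a single subject is tested.
entry = RegisteredStudy(
    title="Verbal overshadowing replication",
    hypothesis="Describing a face impairs later recognition of it",
    planned_sample_size=120,
    planned_analysis="two-sided t-test, alpha = 0.05",
    date_registered=date(2011, 1, 7),
)
print(entry)
```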

Fri, 07 Jan 2011 08:06:55 UTC | #574441

Comment 17 by Pete H

@sbooder

Yes, gravity is apparently wrong. No idea of the specifics, but I've heard that there are multiple indications now from diverse experiments. Don't panic, it's not wrong enough to make much of a difference. You will still be overweight after xmas indulgence. And no one pretends that gravity is understood – only measured and predicted (reasonably approximately). The discrepancies may yet be found to lie with the technology of precision measurements rather than the underlying phenomena.

Regarding the 'inexact' sciences (ignoring the fact that they're all inexact; just some are less inexact than others), it may be timely to review Hayek's statement about scientism:

http://nobelprize.org/nobel_prizes/economics/laureates/1974/hayek-lecture.html

From an empirical perspective, the implications of this posted article are that there may as well be some kind of general mysterious law of nature whereby the recency of an experiment affects the replicability of its results.

This reminds me of an argument I had with a physiotherapist about acupuncture – which I lost. The winning argument was that all the research supports that acupuncture works. My ineffective counter-argument was that the essence of science is to explain how and why things work – not just that things apparently work in experiments. Empiricism is useless without a consistent and meaningful theoretical explanation. Regression to the mean is not an explanation; regression is a description of statistical phenomena. As the article says, this apparently mysterious phenomenon is a result of systematic bias, not statistical randomness.

It doesn’t matter what statistical tests are applied or the statistically significant volume of data in any particular published data set. What they should be looking at are the data sets that don’t surface in published reports because they weren’t interesting enough – which they obviously can’t do.

Negative initial findings when a new idea is tested don’t get reported. You have to have a positive finding to kick off the process before you can even mistakenly believe you’ve got an initial finding. So even if all the evidence to date is overwhelmingly negative this non-evidence just doesn’t accumulate and therefore doesn’t exist. Only the first positive result counts, and it’s downhill from there. Some kind of proposed experimental database might be useful to minimise this problem.

My 12-year-old son threw 10 heads in a row on his first attempt at replicating Derren Brown's famous coin-tossing experiment. So he started counting from then, but he never quite replicated this performance. Luckily he didn't publish. What we don't see are the 10,000 other 12-year-olds who failed to throw 10 heads in a row on their first attempted replication.
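A quick sketch of that arithmetic (fair coins assumed): with 10,000 first attempts, about ten kids will throw 10 heads by luck alone, and on the retry they regress to around five.

```python
import random

random.seed(42)
KIDS = 10_000
FLIPS = 10

def heads_out_of_ten():
    """Number of heads in ten fair coin flips."""
    return sum(random.random() < 0.5 for _ in range(FLIPS))

# First attempts: roughly KIDS / 2**10, i.e. about ten kids, hit 10/10.
lucky = [kid for kid in range(KIDS) if heads_out_of_ten() == FLIPS]
print(f"kids who threw 10 heads first try: {len(lucky)}")

# Replications by the "gifted" few: back to about 5 heads each.
print(f"their replication scores: {[heads_out_of_ten() for _ in lucky]}")
```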

Fri, 07 Jan 2011 08:18:01 UTC | #574443

Comment 18 by rod-the-farmer

Hah. Well THAT explains things. The decline effect must have operated on prayer. In the early days, amputees WERE healed after prayer. A few of them, anyway. At least ONE, surely? But as time passed, and more prayers were applied (if that is the word), the effect dropped off, such that no prayers for amputees are now answered.

So..... what else about religion can we point to where this decline effect has operated?

Fri, 07 Jan 2011 08:34:30 UTC | #574445

Comment 19 by sbooder

Yes, gravity is apparently wrong. No idea of the specifics, but I've heard that there are multiple indications now from diverse experiments. Don't panic, it's not wrong enough to make much of a difference. You will still be overweight after xmas indulgence. And no one pretends that gravity is understood – only measured and predicted (reasonably approximately). The discrepancies may yet be found to lie with the technology of precision measurements rather than the underlying phenomena.

Oh, don't worry, I am not panicking. Gravity is the one area where any experiment that shows anomalies will have to overturn the mathematics behind gravity to be of any consequence, and I think that using Newtonian maths to put men on the Moon, predict planetary orbits and do many more things has proven the theory of gravity fairly safe from nutjobs.

Fri, 07 Jan 2011 09:00:05 UTC | #574447

Comment 20 by AtheistEgbert

Comment 11 by cixelsyd5:

"Just because an idea is true doesn’t mean it can be proved. And just because an idea can be proved doesn’t mean it’s true. When the experiments are done, we still have to choose what to believe".

This seems to be similar to an argument that I was told appears in the newest Christian attempt at a defense of God, the book The Reason for God by Timothy Keller. The argument says that "In the last part of Dawkins' TGD he admits that since we are the product of natural selection, we can't completely trust our own senses. After all, evolution is interested only in preserving adaptive behavior, not true belief." He goes on to ask: if we can't trust our belief-forming faculties to tell us the truth, how can we trust them to tell us the truth about anything, including evolutionary science?

He then quotes Darwin as saying, "The horrid doubt always arises whether the convictions of man's mind, which has been developed from the mind of the lower animals, are of any value or at all trustworthy."

This argument sends my bullshit-o-meter screaming but I am not sure how to respond to it...

All arguments use language, and language contains meaning, and that meaning simply wouldn't exist without our experience. All arguments that deny that reality exists, or that claim our senses are completely unreliable, are self-defeating.

That is at least one way of defeating such arguments.

Fri, 07 Jan 2011 09:29:15 UTC | #574452

Comment 21 by mmurray

Comment 18 by rod-the-farmer:

Hah. Well THAT explains things. The decline effect must have operated on prayer. In the early days, amputees WERE healed after prayer. A few of them, anyway. At least ONE, surely? But as time passed, and more prayers were applied (if that is the word), the effect dropped off, such that no prayers for amputees are now answered.

So..... what else about religion can we point to where this decline effect has operated?

Well, two thousand years ago we know you could turn water into wine and raise the dead. If you plotted the number of miracles against time you could find out what kind of power decay it was following.

Michael

Fri, 07 Jan 2011 10:09:44 UTC | #574463

Comment 22 by Mark Jones

Comment 14 by Richard Dawkins

In particular, I think it is an idea worth considering that, at least in sensitive fields, we might initiate a new set of rules whereby scientists deposit, IN ADVANCE, their intention to run a particular experiment, with a particular number of trials etc, for public scrutiny BEFORE doing the experiment.

I'm glad to hear you say this. As a layman, I've often wondered about this element of the scientific process. Ben Goldacre, as I'm sure you know, has been calling for an effective, well-policed central clinical-trials register for all trials started, because of the tendency to 'bury bad data'. Ben gives a list, here, of how such trials can be manipulated:

  • Choose an unrepresentative group
  • Choose an ‘odd’ dose for your comparator drug
  • Ignore side effects
  • Peeking and stopping early
  • Ignore drop outs
  • Clean up the data
  • Torture the data
  • Bury bad data
  • Produce duplicates of positive trials
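As a toy illustration of the 'peeking and stopping early' item above (my own sketch, not Goldacre's): simulate a drug with zero true effect, check the interim result after every block of patients, and stop the moment it looks significant. Even with nothing there, far more trials end 'positive' than the nominal error rate suggests.

```python
import random
import statistics

random.seed(0)

def peeking_trial(max_n=200, block=10, z_cutoff=1.96):
    """Return True if the trial was stopped early and declared a success."""
    scores = []  # patient outcomes; the true effect is exactly zero
    while len(scores) < max_n:
        scores.extend(random.gauss(0.0, 1.0) for _ in range(block))
        mean = statistics.mean(scores)
        sem = statistics.stdev(scores) / len(scores) ** 0.5
        if mean / sem > z_cutoff:   # peek: does it look significant yet?
            return True
    return False

trials = 1000
false_positives = sum(peeking_trial() for _ in range(trials))
# A single fixed-size look would cross this one-sided cutoff ~2.5% of the
# time; peeking after every block inflates that several times over.
print(f"'successful' trials out of {trials}: {false_positives}")
```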
Fri, 07 Jan 2011 10:21:18 UTC | #574464

Comment 23 by hungarianelephant

Comment 22 by Mark Jones:

Ben Goldacre, as I'm sure you know, has been calling for an effective, well-policed central clinical-trials register for all trials started, because of the tendency to 'bury bad data'.

I have a lot of sympathy with this argument. But you then have to ask: what is the point of the regulator?

In Europe and the US at least, you have to submit all the trials to the regulator, including the ones you stopped or whose results you didn't like. As far as I'm aware, the study protocols also have to be deposited with the regulator for approval in advance.

So what Ben Goldacre is really complaining about is that the scientific community at large doesn't get the material it needs to second-guess the regulator's decisions.

Choose an unrepresentative group

This definitely happens, and I don't think you can entirely blame the people who design the studies.

What most people don't understand is that diagnosis is a very inexact process. It is nearly always tentative. Even testing for a specific pathogen doesn't eliminate the possibility of other causes for the same symptoms.

The people who get into a study typically have pretty firm diagnoses. This is what the study designers want, obviously, because you don't want non-responders buggering up your effectiveness stats. Also, subjects will typically be quite well understood, since the clinics assessing them have a very good idea of what they are looking at.

Real-world diagnosis can be much flakier. It's self-evident that someone wrongly diagnosed is much less likely to respond to a particular treatment than those for whom it is actually designed. So you would expect to see lower real-world efficacy for many drugs. Taking side effects into account, they might even make people worse on average - this doesn't mean the initial study was wrong, just that things don't quite work out the same way in the real world.

Actually, flaky diagnosis can be a problem in studies too. There are companies making money out of trying to eliminate this factor to give trials a better chance of success. I have some ethical concerns about this - the best answer to which would be to force the application of "filtering" to real-world patients as well as study patients.

I'm not suggesting that misdiagnosis is the only problem, nor that pharma companies never engage in (perfectly legal) tweaking of the protocol to try to improve results; I'm just making the point that if you don't insist on consistently rigorous levels of science - which is probably impossible - you should expect to get different results sometimes.

Fri, 07 Jan 2011 12:45:55 UTC | #574500

Comment 24 by Mark Jones

Comment 23 by hungarianelephant

Sorry, the link in my last comment didn't seem to work, so here it is again.

But you then have to ask: what is the point of the regulator?

I'm not familiar with the regulatory regime, but I imagine Goldacre would ideally like a universal register, though who would administer it, I don't know.

Re unrepresentative groups, the link above says:

Goldacre says trials in one Cox-II inhibitor drug were conducted in an entirely unrepresentative group - people in their thirties and forties.

Younger patients are often more likely to respond positively to treatment, and less likely to show up any side effects such as cardiovascular problems.

The more one looks at medical trials, the more difficult they seem, because of complications such as you describe. The attempt to achieve an objective result from something that involves a lot of patient subjectivity is bound to make this so, I guess. But the attempt is worthwhile, I think, nonetheless.

Fri, 07 Jan 2011 13:01:33 UTC | #574505

Comment 25 by kriton

The problem is, who will do all the checking and replication? Peer reviewers don't replicate the experiments or analyse the raw data; they just read the paper.

What would a researcher prefer to do? Present exciting findings of your own, or repeat someone else's experiments? If they turn out to be pretty much OK, you are unlikely to get published, at least in a high-impact journal. If they don't, you may have a period of nasty controversy ahead of you.

And what is more likely to get funding? Replicating what others have done? I don't think so.

Let's also remember that bigger experiments, with more mice or more subjects, cost more money.

Publications should be more Open Access so that results are available to more people, and we also need better IT tools and databases to link together available research, tools that are also open to everyone. Again, this costs.

I think better training in Design Of Experiments (DOE) would also be needed in many cases. For example, in an experimental setting where there are many variables, many probably still use the one-variable-at-a-time approach. But nowadays there is software that can help you to better sample and model the multi-dimensional variable space that the variables can be seen as spanning.
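As a toy contrast of those two approaches (my own example, not from any particular DOE package): with three two-level factors, one-variable-at-a-time visits only four corners of the design space, while a full factorial covers all eight and can expose interactions between factors.

```python
from itertools import product

factors = {"temperature": [20, 40], "pH": [6.0, 8.0], "stirring": [0, 1]}
names = list(factors)

# One-variable-at-a-time: a baseline run, then vary each factor alone.
baseline = {name: levels[0] for name, levels in factors.items()}
ovat_runs = [dict(baseline)]
for name in names:
    run = dict(baseline)
    run[name] = factors[name][1]
    ovat_runs.append(run)

# Full factorial: every combination of factor levels.
factorial_runs = [dict(zip(names, combo)) for combo in product(*factors.values())]

print(f"OVAT runs:      {len(ovat_runs)}")       # 4
print(f"factorial runs: {len(factorial_runs)}")  # 8
```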

Also, when I studied to become a biotech engineer, I got no real training in Good Laboratory Practice (GLP) and things like that. In retrospect, that would have been quite useful. I suspect that such training is still lacking in many cases.

So well, we get the level of quality we pay for.

Fri, 07 Jan 2011 13:04:43 UTC | #574506

Comment 26 by mmurray

Comment 25 by kriton:

I think better training in Design Of Experiments (DOE) would also be needed in many cases. For example, in an experimental setting where there are many variables, many probably still use the one-variable-at-a-time approach. But nowadays there is software that can help you to better sample and model the multi-dimensional variable space that the variables can be seen as spanning.

Ah yes. I recommend this video on the topic. Very amusing. Very true.

Michael

Fri, 07 Jan 2011 13:14:17 UTC | #574510

Comment 27 by aquilacane

Richard wrote

whereby scientists deposit, IN ADVANCE, their intention to run a particular experiment, with a particular number of trials etc, for public scrutiny BEFORE doing the experiment

Could that information be used to steal an experiment and beat them to publication? What if the results can be patented? I don't know how this stuff works, but I expect, scientists being human, that any chance to screw someone over for self-gain will be a welcome opportunity. It's been my experience that information provided to the public must be rock-solid owned or already in the public domain.

Fri, 07 Jan 2011 13:29:08 UTC | #574518

Comment 28 by Mark Jones

Incidentally, maybe this is going a little off topic, so apologies if so, but consider the Wakefield case (of which I'm sure you are aware, HE), where Wakefield apparently falsified his trial data and hoodwinked his co-authors. It's important that such frauds are guarded against.

Fri, 07 Jan 2011 13:30:16 UTC | #574521

Comment 29 by debaser71

Jonah Lehrer is a newbie. Smart kid, but still a kid. When he talks about neuroscience he has something somewhat interesting to say, but this general science article is pretentious trash.

Fri, 07 Jan 2011 13:40:38 UTC | #574523

Comment 30 by NealOKelly

Comment 14 by Richard Dawkins

I've always said (without any empirical justification to back it up) that most psychologists are appalling at statistics and experimental design. It's not their fault; they're not taught it properly at university. And besides, they are usually more interested in reading the case study about the boy whose left hand had a personality independent of the rest of his body.

My ex-girlfriend asked me to review her final-year psychology thesis. When I pointed out that she had got her statistics upside-down, she said, "I can't have, because this is what my tutor did for his PhD."

"Well yes darling, that may be so, but all that means is that his PhD mostly consists of nonsense too."

To his credit, when challenged he conceded that he had indeed got his stats arse-about-face. My concern is not so much that he made a mistake, but that he was awarded a PhD without anyone noticing he'd done so. As I say, psychologists are terrible at statistics and experiment design. It's a real pity, because they need to be BETTER at it than most other disciplines.

Also, I'm not sure how an open-source register of intended trials could work in a commercial environment (pharma):

1) The sponsors are not going to want to share information with their competitors.

2) They're hardly likely to want to continue funding trials of a drug that showed minimal effects in early trials just because they said they would in the register. They'll cut their losses and move on to something else.

Fri, 07 Jan 2011 14:09:42 UTC | #574538