Abstract
In the mid-1990s, evidence emerged that the bicycle helmet law had failed to reduce the risk of cycling injuries. Governments responded by commissioning “studies” that exaggerated the benefits of helmets while ignoring their tendency to increase accidents. Such policy-driven studies are quoted in official government communications to justify this controversial policy.
These deceitful practices waste precious resources while real cycling safety is neglected. This might explain why Australia has one of the worst cycling safety records among developed countries, with a fatality rate 5 times greater and a serious injury rate 22 TIMES higher than best practice.
The hidden side of scientific research
Generally, research is conducted with scientific discipline, with the purpose of furthering scientific knowledge. However, sometimes the entity funding the research has a vested interest in a certain outcome, compromising the independence of the research.
This has been the case, for example, with some research funded by drug companies, where the results are used to sell or promote drugs. Powerful government and corporate interests are increasingly influencing scientific research by controlling its funding, as reported by Brian Martin from the University of Wollongong:
“In the routine practice of scientific research, there are many types of misrepresentation and bias which could be considered dubious. However, only a few narrowly defined behaviours are singled out and castigated as scientific fraud. A narrow definition of scientific fraud is convenient to the groups in society — scientific elites, and powerful government and corporate interests — that have the dominant influence on priorities in science. Several prominent Australian cases illustrate how the denunciation of fraud helps to paint the rest of scientific behaviour as blameless”
Government is a large source of research funding. In the mid 1990s, evidence emerged that the helmet law had failed to improve safety. Several researchers reported this. A bicycle activist who promoted helmet wearing and initially believed in the helmet law looked at the data and concluded:
“It is fair to say that, so far, there is no convincing evidence that Australian helmet legislation has reduced the risk of head injury in bicycle crashes.”
Governments responded by commissioning “studies” defending the legislation, then quoting these policy-driven studies to justify it.
The Henderson report: a policy-driven study
In 1995, the Motor Accidents Authority of New South Wales (Australia) commissioned a policy-driven study that became known as the Henderson Report. Written in an authoritative style, it pretends to be scientific. That illusion breaks down when one notices two features that are unusual for a scientific report:
- It contains many strongly worded, unqualified assertions. Researchers with scientific integrity tend to be very careful with their assertions, making sure they are backed by sufficient evidence and qualified by their context; unsupported or unqualified assertions are rare in a scientific report.
- Assertions are not annotated, making it impossible to verify their data sources.

Consider some of the report's assertions:
- “Helmet design and construction is based on known mechanism of head and brain injury”. Not true. This ignores rotational acceleration, the main cause of brain injury.
- “At the very minimum helmets halves the risk of head injury”. Not true. Helmets can aggravate brain injury through rotational acceleration.
- “Those who do not wear helmets are several times more likely to sustain injury to the brain tissue”. Not true. Ignores that helmet wearers are more likely to have accidents.
- “in Victoria, the number of bicyclists with head injuries decreased by 48 per cent”. Misleading. Ignores the decline in cycling and the parallel reduction in head injuries among pedestrians.
- “The vast majority of head impacts occurring … are easily survivable if a Standards-approved helmet is worn”. Not true. Helmets are not designed to protect in a serious accident.
- “No studies have come to conclusions contrary to the above”. Not true. The Hillman report (1992), one of the most comprehensive and well-known reviews of helmet research at the time, is ignored.
Many assertions in this report are false or misleading. The report ignores evidence that does not support helmets.
This “study” has received much criticism, notably in The Vehicular Cyclist:
“Individuals and organizations zealously pushing mandatory helmet use for cyclists are continuing to churn out reams of propaganda. One of the more voluminous efforts is Michael Henderson’s “The Effectiveness of Bicycle helmets: A Review” 1995, a politically motivated paper prepared on behalf of the Motor Accidents Authority of New South Wales, Australia apparently in a desperate effort to justify the State’s botched law outlawing cyclists who ride without a helmet.
Henderson’s report recycles much of the same old material that’s been cited by others over the years. The studies he references fail to provide a real world context, and to show any particular understanding of cycling. Although presenting bicycle head injuries as a worldwide problem, Henderson neglects to provide us with any sense of the size or scope of it.”
Contrast the bold, narrow claims of this report with the more comprehensive approach of the Hillman report:
“By wearing helmets, cyclists are at best only marginally reducing their chances of being fatally or seriously injured in a collision with a motor vehicle which is the predominant cause of these injuries. Even the most expensive ones provide little protection in these circumstances. Moreover, the argument in favour of helmets would have validity if there were proof that behaviour does not change in response to perceived risk. But there is no such proof. Safety devices encourage higher levels of risk-taking. As a result, cyclists are likely to ride less cautiously when wearing a helmet owing to their feeling of increased security. After all, the message of the advocates of helmet wearing is that such a practice will protect the cyclist’s head adequately in the event of any accident, not just a minor one when cyclists are hit by very slow-moving vehicles or fall off and hit their heads on the ground. Cyclists may be less likely to have an accident if they are not wearing a helmet, and are therefore riding with greater care owing to an enhanced sense of their vulnerability.”
Subsequent policy-driven studies
Official evaluations of the helmet laws commonly employ a biased selection of research and statistics, resulting in benefits being unduly attributed to the laws while adverse effects (such as an increased risk of accidents) are ignored. Despite data showing that cycling safety has lagged behind pedestrian safety since the helmet law, policy-driven studies found a way to claim that the helmet law was a success. For example, a 1997 report from a government agency made two misleading claims:
- “Cycling casualties decreased after the helmet law”. This ignored the decrease in cycling. Per cyclist, casualties increased, as illustrated in the sketch below.
- “A ‘strong correlation’ between higher helmet wearing rates and lower casualties”. The underlying data indicates the opposite.
The helmet law was introduced as part of a package of road safety measures, including a crackdown on speeding and drink driving. The number of cyclists fell significantly. Any assessment of the helmet law must take these confounding factors into account. Yet many government-funded “studies” like this one did NOT adjust for them, attributing all apparent improvements to the helmet law. Such negligence is difficult to comprehend. How could the “researchers” miss such basic adjustments? It is odd that these mistakes favour the legislation while the government funds the “research”.
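To see why adjusting for exposure matters, here is a minimal sketch with hypothetical numbers (they are assumptions chosen for illustration, not actual crash statistics): when cycling declines faster than casualties, the risk per cyclist rises even though the raw count falls.

```python
# Hypothetical illustration: why raw casualty counts mislead
# when cycling exposure changes at the same time.

def casualty_rate(casualties: int, cyclists: int) -> float:
    """Casualties per 10,000 cyclists."""
    return casualties / cyclists * 10_000

# Assumed pre-law and post-law figures, chosen only to show the arithmetic.
before = casualty_rate(casualties=1_000, cyclists=400_000)  # 25.0 per 10,000
after = casualty_rate(casualties=700, cyclists=240_000)     # ~29.2 per 10,000

print(f"Rate before the law: {before:.1f} per 10,000 cyclists")
print(f"Rate after the law:  {after:.1f} per 10,000 cyclists")
# Casualties fell 30%, but cycling fell 40%, so risk per cyclist ROSE ~17%.
# A study quoting only the 30% drop in casualties would call the law a success.
```

A study that reports only the raw counts hides this increase in risk; that is the basic adjustment the government-funded evaluations failed to make.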
Most medical case studies claiming that the helmet law has been effective are also riddled with errors. They typically start from the assumption that helmets save lives and attempt to “prove” it by selectively fishing for data that supports this predetermined conclusion.
Some government studies not only contain false claims, but also fail to rectify them after corrections are published.
In 2000, the Australian Transport Safety Bureau (ATSB), a federal government agency, released a meta-analysis that claimed to provide
“overwhelming evidence in support of helmets for preventing head injury and fatal injury”.
This claim was rebutted in 2003; the rebuttal highlighted that:
“the meta-analysis … take no account of scientific knowledge of [brain injury] mechanisms”
The ATSB did not reply to the rebuttal, in effect abandoning its claim. Despite being discredited, this analysis is still used by the government to defend the helmet law, claiming that helmets reduce the risk of head injury by 60%.
In 2011, a meta-analysis re-assessed this ATSB meta-analysis. It concluded:
“This paper … was influenced by publication bias and time-trend bias that was not controlled for. As a result, the analysis reported inflated estimates of the effects of bicycle helmets …
According to the new studies, no overall effect of bicycle helmets could be found when injuries to head, face or neck are considered as a whole.”
Publication bias is the tendency of contradictory or inconclusive results to go unpublished, producing a literature of apparently consistent findings that exaggerates the actual effect. Time-trend bias is the failure to account for pre-existing trends, such as injury rates that were already declining, so that the chosen time period exaggerates the actual effect.
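To make publication bias concrete, here is a minimal simulation sketch; all parameters are assumptions chosen for illustration. Even when the true effect is zero, a literature filtered to “positive” results shows a sizeable average effect.

```python
# A minimal sketch of publication bias: the true effect is zero, but only
# studies with a strong-looking 'protective' result reach the literature.
import random

random.seed(1)

def simulate_study(true_effect: float = 0.0, noise: float = 0.5) -> float:
    """One study's estimated effect: the true effect plus sampling noise."""
    return random.gauss(true_effect, noise)

all_studies = [simulate_study() for _ in range(1_000)]

# Publication filter (assumed): only estimates above a threshold get published.
published = [e for e in all_studies if e > 0.5]

print(f"Mean effect, all studies:       {sum(all_studies) / len(all_studies):+.3f}")
print(f"Mean effect, published studies: {sum(published) / len(published):+.3f}")
# The published subset shows a large average 'effect' even though the true
# effect is zero; a naive meta-analysis of it would report inflated benefits.
```

A meta-analysis that pools only the published studies inherits this inflation, which is the flaw the 2011 re-assessment identified.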
Since then, various government agencies have wasted more taxpayers’ money by commissioning policy-driven “studies” attempting to obfuscate the failure of the helmet law.
A prominent university seems to be moving away from policy-driven studies, as reported here:
“the faculty wished to move away from conducting sponsored policy studies for government because this did not generate valuable intellectual property”
The dire consequences of this dishonest behaviour
Independent researchers who have studied the results of the helmet law have come to different conclusions.
Dorothy Robinson, a researcher from the University of New England in Armidale, said:
“mandatory bicycle helmet laws increase rather than decrease the likelihood of injuries to cyclists …
Having more cyclists on the road is far more important than having a helmet law, for many reasons …
[the] governments [which introduced the helmet laws] do not like to admit they’ve made mistakes”.
Bill Curnow, formerly a scientist at the CSIRO, concluded in a scientific article:
“Compulsion to wear a bicycle helmet is detrimental to public health in Australia but, to maintain the status quo, authorities have obfuscated evidence that shows this.”
An independent group of public health and transport practitioners and researchers wrote in their report:
“The failure of mass helmet use to affect serious head injuries, be it in falls or collisions, has been ignored by the medical world, by civil servants, by the media, and by cyclists themselves. A collective willingness to believe appears to explain why the population-level studies are so little appreciated. … The disconnect between received wisdom and the facts is stark.”
These are strong words from independent researchers, revealing frustration at governments’ unwillingness to admit they made a mistake.
For how long can policy-driven studies sustain failed policies?
In what way can this be justified as good use of taxpayers’ money?
Such deceptive studies add little to scientific knowledge. They exaggerate the reduction of minor injuries while ignoring the increased risk of serious injuries. These deceitful practices have dire consequences: they tend to mislead policy makers towards false “solutions” to cycling safety, while more effective measures, like reducing the risk of accidents, are neglected.
This might explain why Australia has one of the worst cycling safety records among developed countries, with a fatality rate 5 times greater and a serious injury rate 22 TIMES higher than best practice.
The practice of commissioning dubious studies to defend a failed policy while neglecting more effective safety measures constitutes an abuse of public trust by the bureaucrats entrusted with cycling safety.