How to interpret scientific claims and data – 10 tips from Garth Rapeport

Garth Rapeport is co-founder and CEO of Pulmocide Ltd, a biotech developing inhaled antifungal medicines for life-threatening lung conditions. 

A major problem in science and medical journalism is the near-inevitable tendency for publications to seize on a piece of research and hype it until its actual meaning is lost.

Think about how many articles you see, even in mainstream media, exclaiming confidently that ‘coffee kills’, only to completely contradict themselves the next day. This injudicious, casual use of scientific data is misleading at best, and potentially dangerous at worst. 

How to get the most out of scientific data and claims

This kind of reporting and analysis of scientific research data misleads both members of the public and, potentially, those who make and change policies. The scientific community must try to mitigate this kind of hype where possible.

In 2013, a group of researchers wrote about exactly this problem in the scientific journal Nature. In their article, they give various pointers, tips and ideas that readers should bear in mind when reading scientific journals, ranging from awareness of potential bias to data manipulation.

1. Measurements will always be imperfect

Any scientific measurements carried out by researchers will inevitably be imperfect. For example, two researchers measuring the length of the same shelf, using exactly the same measuring instrument, will give two slightly different readings. This is important to remember when looking at any research data – there can be no ‘perfect’ measurements or proofs.

In addition, the things being measured are themselves always in a state of flux. Often, many different causes combine to create a given effect, and accurately identifying the true source of the variation is extremely difficult. Anyone reading a scientific paper, or an article based on the research, should bear this in mind.

2. All research is biased to some extent

Bias is usually unintentional in scientific research, but can of course also be intentional. If an experiment isn’t designed carefully and accurately, its results may be skewed in a particular direction.

Compare it to a poll of voters – if a poll samples only Conservative supporters and no Labour supporters, it will not reflect a true picture of the country’s real voting preferences or political opinion.

Clinical trials that fail to adhere to the ‘double blind’ format are more likely to show biased results. In the context of medical experiments, this means that neither the subjects nor the researchers know who is receiving the placebo and who is receiving the treatment.

3. Correlation and causation

Watch for whether the author of the piece states flatly that correlation does not imply causation. This is imprecise, and is more accurately phrased as ‘correlation does not necessarily imply causation’.

The relationship may indeed be causal, but readers should always be aware of this and look for other explanations. A good example is the reported correlation between coffee drinking and pancreatic cancer. In real life, some coffee drinkers also drink alcohol, and this could be the cause of the pancreatic cancer rather than the coffee.
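Confounding of this kind can be sketched in a few lines of simulation. This is a hypothetical illustration of the coffee/alcohol example above, not analysis of any real data: the variable names and coefficients are invented, and cancer risk here depends only on alcohol, yet coffee and cancer still come out correlated.

```python
import random

random.seed(42)

# Hypothetical confounder simulation: "cancer" risk is driven only by
# "alcohol", but "coffee" is correlated with alcohol, so coffee and
# cancer appear correlated despite no causal link between them.
n = 10_000
alcohol = [random.gauss(0, 1) for _ in range(n)]
coffee = [0.6 * a + random.gauss(0, 0.8) for a in alcohol]  # confounded with alcohol
cancer = [0.5 * a + random.gauss(0, 1.0) for a in alcohol]  # caused by alcohol only

def corr(x, y):
    """Pearson correlation coefficient."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5

print(round(corr(coffee, cancer), 2))  # clearly positive, yet coffee plays no causal role
```

A naive reader of the printed correlation would conclude coffee is dangerous; only knowledge of the data-generating process (or a controlled experiment) reveals the confounder.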

4. Be aware of exaggerated results

It’s extremely common for scientific studies, and articles written about them, to exaggerate a specific result. This can happen through genuine error or by design. Either way, these exaggerations are often exposed when other research clearly shows a less dramatic result.

Readers of a scientific paper that confidently asserts that a certain drug cures 50% of people may well discover that such a result is not replicated in subsequent scientific studies.

This is why a single paper or article should never be used as a complete source for a scientific claim. Extrapolating false equivalences from data is also a common theme to look out for.

5. Did the research use a control group? 

If there is no evidence that a medical experiment used a control group, then the data should be treated with scepticism: without a control group there is no baseline for comparison, and the study can’t be thought of as a real experiment.

Subjects used in experiments should also be randomly selected. This helps the researchers to eliminate the potential for any unconscious bias. A subject cannot, for example, request to receive the treatment over the placebo. This would render the results useless.
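The random assignment described above is mechanically very simple. The following is a minimal sketch (the subject labels and group sizes are invented for illustration): shuffle the subject list, then split it, so that neither subjects nor researchers choose who receives the treatment.

```python
import random

random.seed(7)

# Hypothetical trial roster: 20 subjects randomly split into two
# equal groups. No one can opt into the treatment arm.
subjects = [f"subject_{i:02d}" for i in range(20)]
random.shuffle(subjects)  # random assignment removes selection bias
treatment, placebo = subjects[:10], subjects[10:]

print(len(treatment), len(placebo))  # 10 10
```

In a real double-blind trial the group labels would also be hidden from the researchers administering the treatment; this sketch only covers the randomisation step.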

6. Seek out study replications

Before assuming data shown in an experiment or research project is 100% accurate and can be used as a solid basis for an article or anything else, look for evidence from research replication. 

Has the study’s result been replicated using different subjects, across a different population? The ideal scenario is finding other studies by different teams of researchers that back up the results.

7. Remember that all researchers are human!

Research teams, scientists and medical professionals are all human beings. Therefore, they make errors and they follow unconscious biases. It’s impossible to expect otherwise, and this must always be kept in mind. 

8. Understand how statistics work

For scientific data to be meaningful, it has to be statistically significant. This means that it can reasonably be assumed not to have occurred simply by chance. By convention, scientists commonly use a 5% threshold: they accept a result as significant only if there is less than a 5% probability that it arose from chance alone.

If the results of an experiment don’t meet this threshold, that doesn’t automatically mean nothing significant occurred. The effect may be real, but the sample tested may not have been large enough to detect it statistically.

However, the reverse is also true. Just because an experiment yields results that are statistically significant, it doesn’t necessarily follow that the finding is important.
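The interplay between sample size and significance can be sketched with a simple coin-flip simulation. This is a hypothetical illustration, not from the original article: the `p_value` helper estimates, by simulation, how often a fair coin would produce a result at least as lopsided as the one observed.

```python
import random

random.seed(0)

def p_value(successes, n, trials=20_000):
    """Estimate the two-sided probability of a fair coin giving a result
    at least as far from 50/50 as `successes` out of `n` flips."""
    observed = abs(successes - n / 2)
    extreme = 0
    for _ in range(trials):
        heads = sum(random.random() < 0.5 for _ in range(n))
        if abs(heads - n / 2) >= observed:
            extreme += 1
    return extreme / trials

# A modest effect (62% heads) is significant with 100 flips...
print(p_value(62, 100))  # well below 0.05
# ...while an even larger effect (65% heads) is not with only 20 flips.
print(p_value(13, 20))   # well above 0.05
```

The small study is not evidence that the coin is fair; it is simply too underpowered to say anything either way, which is exactly the point made above.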

9. Most people are incapable of accurate risk perception

Most people can’t assess risk accurately. For example, while many people are terrified of the risks posed by terrorism, they don’t express the same worries about driving. This is despite the fact that car accidents kill around 1.25 million people every year, according to the Association for Safe International Road Travel. Terrorism, by contrast, caused 18,814 deaths globally in 2018, according to the Global Terrorism Index.

10. Be wary of how data is used, and who is using it 

Big Data increasingly alters our perception of what we see, read and hear. And a scientific experiment carried out with no initial hypothesis can be used to ‘prove’ just about anything.

For example, as there are now so many DNA samples available for analysis, it is possible in theory to compare the DNA of many thousands of people and find some statistically significant differences purely by chance. However, if scientists are just trawling massive amounts of data to see what they can find by accident, this can lead to extremely misleading reports.
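This data-dredging effect is easy to demonstrate with pure noise. The sketch below is hypothetical (the "markers", sample sizes and the rough |r| > 2/√n significance rule of thumb are my own choices, not from the article): with enough comparisons, completely random data yields dozens of "significant findings".

```python
import random

random.seed(1)

def corr(x, y):
    """Pearson correlation coefficient."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5

# Purely random data: one "outcome" and 1000 unrelated "genetic markers"
# measured on 50 hypothetical people.
n_people, n_markers = 50, 1000
outcome = [random.gauss(0, 1) for _ in range(n_people)]
markers = [[random.gauss(0, 1) for _ in range(n_people)]
           for _ in range(n_markers)]

# Rough rule of thumb: |r| > 2/sqrt(n) is "significant" at about the 5% level.
threshold = 2 / n_people ** 0.5
false_positives = sum(abs(corr(m, outcome)) > threshold for m in markers)
print(false_positives)  # dozens of "discoveries" from pure noise
```

Roughly 5% of the markers pass the threshold by chance alone, which is why hypothesis-free trawls need corrections for multiple comparisons before any ‘discovery’ is reported.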

It’s also worth remaining sceptical when confronted with extreme claims based on data. Any statistic that seems shocking should be treated with caution. It’s likely that there are various explanations behind it, and not just the one the writers are trying to convince you of.

Read more from Garth Rapeport at his blog.

Jess Young

Jess is a writer at the UK's largest independent press agency SWNS. She runs women's real-life magazine Real-Fix.com, as well as contributing articles and features to all of the major titles and digital publications.
